Well-known science publisher Springer retracts 64 papers after discovering fake peer reviews

August 19, 2015 • 11:00 am

As every professional academic knows, especially those at “research” universities like mine, publishing research papers is the currency of professional advancement. Teaching and “service” (i.e., being on university committees or editorial boards of journals) will be cursorily scanned when it’s time for tenure or promotion, but we all know that the number of papers in your “publications” section, and where they appeared, are the critical factors. As one of my colleagues—now very famous, but I won’t give names—once told me about tenure and promotion committees looking at publication lists, “They may count ’em, and they may weigh ’em, but they won’t read ’em!” Indeed.

Grants received often count too, for universities just ♥ “overhead money” that they get as a perk from the granting agency (this can be as high as 70% or more of the monies awarded to the researcher). Grants, however, really shouldn’t count, for it’s now very hard to get them, and at any rate they’re simply a means of procuring funding to do research—and it’s the research itself (judged through publications) that really counts. Granted (no pun intended), it’s hard to do research without government funding, but there are other sources, and theoreticians can often do their work with only a computer, pencil, and paper.

Here at Chicago, I’m proud to say that our promotion and tenure committee is explicitly forbidden from weighing grant support when promoting people to tenure or full professorship. That’s an explicit recognition that what matters is research, not dollars raked in.

The relentless and increasing pressure to publish, which is partly due to an increasing number of students and postdocs competing for jobs, has had a marked side effect: papers being retracted after publication.  There are several reasons for this, including an author finding out his or her data were wrong, someone else being unable to replicate the results (this is a very rare cause for retraction), discovery that the data were faked (sometimes found by others trying to replicate the results), and discovery that the “reviews” of a paper—the two or three independent appraisals solicited by a journal before deciding whether to accept a paper—were fake. The last two come directly from pressure to publish and advance one’s career.

The “fake review” problem is increasing, and, as the Washington Post reports, the eminent scientific publisher Springer has just retracted no fewer than 64 papers published in its journals, all on the grounds that the reviewing process was undermined by fakery.

The list of retracted papers, from Retraction Watch, is here. All are by authors with Chinese names, and the journals are reputable ones, including Molecular Neurobiology, Molecular Biology Reports, Molecular Biology and Biochemistry, Tumor Biology, Journal of Applied Genetics, Clinical and Translational Oncology, and Journal of Cancer Research and Clinical Oncology.

A retraction on these grounds, of course, doesn’t mean that the paper was wrong, or the data faked, but that somehow the authors or the journal (in the journal’s case, sometimes inadvertently) bypassed the normal review process. That seems especially serious for papers related to cancer, as are many of the ones that were retracted.

How does this happen? After all, traditionally journals would ask two or three good people in the field to review a submitted manuscript anonymously; the reviewers would tender their reports; and the editor would make a decision. How can that be subverted?

Easily—there are at least three ways.

  • Authors can suggest people to review their manuscripts, but give fake names and email addresses. They can then write reviews of their own manuscripts (positive, of course), bypassing the normal process. I’ve always objected to the practice of soliciting potential reviewers’ names from authors, and ending that is the obvious way to stop this brand of fakery. Besides, what author would suggest the name of a reviewer whom he/she didn’t know would regard the manuscript favorably? Asking authors to suggest names is lazy, and it undermines objectivity.
  • Journals themselves can commit fakery if they’re desperate enough to want to publish papers. The Post notes how this is done: “In July, the publishing company Hindawi found that three of its own editors had subverted the process by creating fake peer reviewer accounts and then using the accounts to recommend articles for publication. (All 32 of the affected articles are being re-reviewed.)”
  • Increasingly, journals are farming out the work of reviewing to independent companies that, for a fee, receive papers from authors, get reviews, and then send those reviews to journals that either the company itself suggests or the author deems appropriate. Many journals—but not the good ones, I hope—will accept these reviews, which they haven’t solicited, as sufficient adjudication of the paper. If the reviewing service is unscrupulous, it can obtain nonobjective or fake reviews in several ways. (In the case of many articles recently retracted, these reviewing services invented fake reviewers and provided bogus reviews.)

Now that these scams have been revealed, journals are trying to do something about them. After all, it doesn’t help Springer to develop a reputation for publishing substandard or improperly reviewed papers. As the Post reports:

Publishers are starting to implement policies aimed at preventing fake reviewers from accessing their systems. Some have stopped allowing authors to suggest scholars for their peer reviews — a surprisingly common practice. Many are mandating that peer reviewers communicate through an institutional e-mail, rather than a Gmail or Yahoo account. And editors at most journals are now required to independently verify that the peer reviewers to whom they are talking are real people, not a fabricated or stolen identity assigned to a fake e-mail account.

That’s a good start, and will take care of many of the reviewer problems. I’d still like to see the end of independent third-party reviewing services, as they’re just ways that journals fob off their own responsibilities on others, and they provide avenues for corruption.

Further—and I don’t know how to do this—we need to relax the relentless pressure on younger researchers to accumulate large numbers of publications, and we need to concentrate more on quality than quantity of papers.  One reason for this pressure is the growing number of advanced-degree students being produced by academics—students who have trouble finding jobs and therefore are compelled to pile up large numbers of papers to outcompete their peers. (“They may count ’em but they won’t read ’em.”) The combination of an increasing number of students and an ever-shrinking pot of grant funds from federal agencies—thus increasing competition, since grant proposals are awarded in part on the basis of an investigator’s past publication rate—is toxic.

h/t: Dom

66 thoughts on “Well-known science publisher Springer retracts 64 papers after discovering fake peer reviews”

  1. I think we also need to rethink, at least in part, the mechanisms of peer review. It’s well tailored to the pre-Internet technology of (relatively by modern standards) small numbers of scientists producing work that can (and must) be distributed through the postal service. But today the volume of work is hugely bigger and we have technology designed to work with such volumes of data.

    There are mathematically sound methods of guaranteeing secure and private and anonymous and authenticated dispersal of data and voting and everything else you can imagine. You could trivially design a system such that nobody knew who wrote the paper, who reviewed it, who approved it, who rejected it, and so on, and yet have supreme confidence that the entire process is sound.

    Something at that level may be overkill, but I think we at least need to start thinking in that sort of direction.

    Whether or not the current journal publishers are able to transcend their current working model is another matter. I wouldn’t at all be surprised if some upstart out of nowhere comes along and sweeps the whole field clean; that seems to be the typical pattern with paradigm change.


    1. I don’t see why the volume of papers is the issue. Yes, there are more scientists and papers these days, but that means there are also more reviewers available.

      I don’t think there is anything basically wrong with the peer-review system.

        1. Yes, I agree with you on that one (though I’ve never heard of it in my research area).

          What I meant is that there is nothing *basically* wrong with the peer-review system, nothing that minor tweaks like that wouldn’t fix.

          1. I am always asked to recommend reviewers of my papers, and most journals do ask at least one of the reviewers I suggest. Nevertheless, they also usually find at least one other reviewer with a different perspective.

        2. I don’t think this is as bad as you think it is. In my limited experience (I’ve published about a dozen papers), judging by the tone and content of the reviews, if a manuscript is sent out to three reviewers, at most one of them will have been one that we, the authors, had recommended.

          I assume that editors ask for suggested reviewers because the editor will often not be sufficiently familiar with the subject matter of the manuscript to determine who would be best-qualified to review it. In the absence of suggestions from the authors, I would think that the editor would likely send the manuscript out to authors of papers cited in the manuscript, but those are likely to be reviewers who the manuscript authors would recommend anyway.

          Of course, the system can be abused. A lazy editor could send the manuscript out only to reviewers recommended by the authors, and authors could recommend reviewers who have already privately given the authors positive feedback on the manuscript.

          1. because the editor will often not be sufficiently familiar with the subject matter of the manuscript to determine who would be best-qualified to review it.

            The answer to that one is to expand the editorial board.

            1. A responsible action editor will not send the manuscript out only to reviewers recommended by the authors, and in making a final decision on the manuscript will take into account whether the reviewer was recommended by the authors. So the problem is not soliciting suggestions for reviewers from authors, but the minority of editors who are irresponsible. A practical solution to that problem, which some journals have already instituted, is to hold editors responsible for their decisions by publishing their name with the article.

            2. I’m not sure that’s always feasible, with all the narrowly nested niches-within-niches out there. I’ve worked on some obscure topics with maybe 20 active researchers worldwide, all publishing in 10 different journals. There are probably not enough of us with academic credentials to fill associate editor posts at all the journals.

      1. If the majority of submissions are written by students, then you have a pyramid social structure. Beginning grad students can pump out a lot of low-quality papers but they can’t really be enlisted as reviewers.

    2. I think a lot of people feel this way, but there hasn’t yet been a snowball around any specific new paradigm. I think the process is incrementally moving toward this model: authors publish directly into preprint servers like the arxiv. The authors may send links to their preprint manuscripts in lieu of submission to peer-review. Journals appoint editors to solicit peer reviews on those manuscripts; the reviews may be fully open or not, depending on the authors’ and journal’s preference. The results of review may be used to assign a reputation score to the original manuscript, optionally annotated with full-text reviews, authors’ responses and public comments. In the end, the journal publishes a curated list of links to papers that have succeeded in this process. That doesn’t need to be the end of revision and review and discussion, which can still take place on the manuscript server.

      In my field, we’re moving very close to this process, except we don’t yet have good mechanisms for reputation scores, public comment, open peer review or ongoing revision/review. These features increasingly take place on blog networks, but I think it would be useful to build them into the peer review process.

      There are still some big philosophical differences on whether peer review should be open or private, anonymous or signed. I lean towards fully open and signed reviews, which would at least catch these cases of fake reviewers.

    3. You could trivially design a system such that nobody knew who wrote the paper

      Whoa, hold up there… so all papers end up without authors? Like a Wikipedia anonymous sandbox?

      1. Not sure, but I think he probably means that none of the people involved in the review process will know who the authors of the paper are, for the purposes of the review process only. Not published anonymously.

      2. The math would let you anonymize and de-anonymize at whatever stages you felt were appropriate. I’d expect you’d want to preserve anonymity up through some final point of blessing, at which point you’d likely want to reveal everybody who positively contributed to the results.


          1. That would be a policy decision. You could implement it however you wanted; the math is very flexible.

            I would tend to think that you’d want full privacy until “blessing” and then full disclosure after that, but that would require some thought.


          2. I’d say absolutely to that. After the process is complete, fully open for anyone who cares to take a look.

        1. This is already widely practiced in my field. It has its advantages and disadvantages. For one thing, in a niche area the authors can usually be deduced. For another thing, author expertise and resources are sometimes relevant to the paper’s content, and can sometimes be an indicator of possible fraud. For example, if I get a paper from a high school English teacher that describes results from a million-dollar electronics experiment, I would probably want to give them a call before sending the paper out to reviewers.

    4. The notion of keeping it all anonymous might work in some areas of science, and this is where it is probably most important, but in many fields outside science it is likely to be impossible.
      In my particular area of law I could probably identify the author of any article to be reviewed in Australasia and make a good guess for any other country likely to come my way. Mind you, what a reviewer outside the more quantitative fields is looking for may be quite different: usually a coherent argument properly based in the literature and which adds something to it. Unfortunately that can often morph into “I don’t like the politics of the article.”

    1. I’m curious about the strange little fact thrown in there re ALL the retracted papers “are by authors with Chinese names.” That’s weird.

      It might simply mean that it’s easier for scientists to give a pass to an unfamiliar name if it’s foreign and/or from a very large country. Or I suppose it could indicate a particular problem with China (my understanding is that virtually 100% of medical papers coming out of China on alternative medicine are positive.)

      I don’t know. But that’s an interesting aspect of a discouraging problem.

      1. As my native language is not English, I was from time to time contacted by a company offering to take charge not only of the English writing of my papers, but of the whole process of manuscript submission (and it was far from cheap). Perhaps these unfortunate Chinese scientists trusted such a company and are actually as innocent as newborn lambs.

      2. There’s a bit of a history of the Chinese government doing stuff that’s not entirely kosher to show their country in a better light. One I remember is the children not old enough to compete winning Olympics gymnastics medals in Beijing.

        There could be undue pressure being put on scientists by the government, so the government can point to the number of leaders their country has in the scientific community.

        I’m wary of assuming a Chinese-sounding name means someone is from China though – I’d like to see their institutions too to confirm.

  2. I agree with you about the toxicity, but the tendency for young researchers to publish more and more, and think less and less about the actual science, is a splendid example of “Darwinian” selection.

  3. I tried to contribute to the peer-review process. Really, I did.

    Being self-employed, though, it was way too big of a commitment (usually 1-2 weeks of research, thinking, and writing required per review, on my dime). After a couple of these painstaking reviews were countermanded by the (off-base, IMO) opinions of either editorial staff or another reviewer — or after receiving papers that should never have made it past the editors in the first place — I have given up.

    Last week I received a request for review by Springer… followed by a stock form-request for other authors and their e-mails. I declined, letting them know why. The system is a sham, at least in my field, and I don’t have a clue how it will ever be fixed. Better open-publishing models & an absolute requirement that all taxpayer-funded projects provide not only the papers, but also public repositories for (properly redacted) source data AS A CONDITION OF PUBLICATION… THAT would be a step in the right direction, IMO. It kind of breaks everybody else’s business models, so I’m not holding my breath.

  4. What we really need to do is cut commercial publishers out of the picture entirely. There is no need for them. All journals should be run by the learned societies on a cost-neutral basis.

    Such societies then have the motive to ensure good-quality peer review (as opposed to the incentive to cut corners to boost profits).

    The other benefit is reducing the absurd cost of the commercially published journals. Internet-based publication should be pretty cheap these days.

    1. That is another problem! The costs of even getting a pdf file of a paper can be crazy. Even 20 years after they were published. Sheesh!

      1. Yes I recently looked up a short legal article (not through the usual university databases) in an academic journal from 1949. They wanted 40 UK Pounds for it.

      2. That’s become a major irony in academia. Lots of new cutting-edge research is published open-access. But you have to really pay through the nose to see the old obsolete stuff.

  5. Any thoughts (anyone) on how to measure quality? Even if restricted to a discipline?

    IOW, I agree quality of research should be more important than quantity, all else being equal, but …

    At least it is good to know that Springer is not totally mercenary, though. (Their prices are outrageous.)

    1. That’s what peer review is designed to do. Get enough people with enough reputation to endorse something and there’s a good chance that what they’re endorsing is of good quality.

      You can take that another step. If you’ve got lots of people with good reputation to endorse you, you yourself have a good reputation and can be at least somewhat trusted to endorse others.

      Obviously, there needs to be negative feedback in the system, as well….


    2. I would go with manuscript and researcher reputation scores. A system where any researcher can contribute numerical scores in various categories (novelty, tutorial value, rigor, etc), and those scores are weighted by the researcher’s own reputation or qualification score. The scores can shift over time, facilitating a continuous rolling peer review. I think something like this could be implemented by ResearchGate or one of the other emerging platforms for research networking.
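      For concreteness, here is how such a weighted score might be computed. This is only a rough sketch of the idea, with all category names, reputation values, and the function itself purely hypothetical:

```python
# Sketch of reputation-weighted manuscript scoring (hypothetical model).
# Each review gives per-category scores; each reviewer carries a reputation
# weight, so endorsements from established reviewers count for more.

def weighted_scores(reviews):
    """reviews: list of (reviewer_reputation, {category: score}) pairs.
    Returns the reputation-weighted mean score for each category."""
    totals, weights = {}, {}
    for reputation, scores in reviews:
        for category, score in scores.items():
            totals[category] = totals.get(category, 0.0) + reputation * score
            weights[category] = weights.get(category, 0.0) + reputation
    return {c: totals[c] / weights[c] for c in totals}

reviews = [
    (0.9, {"novelty": 4, "rigor": 5}),  # well-established reviewer
    (0.3, {"novelty": 2, "rigor": 3}),  # early-career reviewer
]
print(weighted_scores(reviews))  # novelty ≈ 3.5, rigor ≈ 4.5
```

      Because the score is just a weighted mean, it could be recomputed whenever a reviewer’s own reputation shifts, which is what would make a continuous rolling review possible.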

      1. An advantage to such a system is that it would lend itself well to empirical analysis…and, if somebody figured out a better scoring algorithm or what-not, it could be retroactively applied to the entire corpus.


  6. Further—and I don’t know how to do this—we need to relax the relentless pressure on younger researchers to accumulate large numbers of publications, and we need to concentrate more on quality than quantity of papers.

    It’s even worse than that. Many researchers publish what is essentially the same paper in 3 or 4 different journals under different titles. The sheer volume of papers being written makes it hard to figure out that this is happening without reading them.

    1. A very similar thing that I sometimes see is where they split what should be one paper into 2 or 3 papers.

      1. Related: Don’t you hate it when several references in a paper are to the same author’s “forthcoming” stuff? Sometimes this is legit, but when quite a few are like that, one wonders. Even the Association for Symbolic Logic journals let authors do this too much IMO!

    1. YES!

      Whatever you reward, that’s what people will optimize their efforts for. Pay people to write lots of papers, and they’ll write lots of papers.

      You want to make sure that what you’re rewarding is what you actually want to get from people. Do you want people to write lots of papers, or do you want them to make significant contributions to the advancement of human knowledge?


      1. That is the mechanism behind virtually every one of society’s ills. My personal opinion is that much of it is caused by liars, cheaters and stealers gaming the system.

        In the case of peer review in science it is really ironic. Instead of basing decisions regarding how to best design and then regulate the peer review system on what is demonstrated to work best (i.e., experiment & observation, the scientific method), the same human weaknesses that science is supposed to account for are allowed to degrade the system just like in any other collective human activity.

        Also, I can think of a few very useful things to do with all the new scientists fighting to find some work. But if the work isn’t considered valuable by the people who control the money then who would pay for it?

        1. My personal opinion is that much of it is caused by liars, cheaters and stealers gaming the system.

          Indeed — and, yet, it is to be expected. To a large extent, we all want the best bang for the buck; they just think they’ve found a good way to maximize their personal profits and don’t care that their profits come at everybody else’s expense.

          That’s why you want to design your system such that the way to maximize personal profits (whether monetary or status or whatever) aligns well with whatever the overarching goal is. Make it so the path of least resistance is to, in this case, do good science. As a bonus, not only do you remove the incentive to cheat, you provide that much reward for everybody else.

          Of course, the devil is in the details….


  7. Another problem that might contribute to this is submission bloat. I was recently a track chair at a mid-size conference (in my field we consider conferences to be peer-reviewed venues). My track received over 150 technical submissions, and I was given about three weeks to solicit reviews (at least three per submission) and make recommendations. A significant number of those submissions were just throwaway papers, where a professor may have forced some students to write a submission without any guidance or correction. I’m sure this is driven by the “publish or perish” climate, but our system is set up with the assumption that every paper represents a genuine effort deserving of formal review. That is no longer true for possibly the majority of submissions. Some venues implement “editor’s rejection” without peer review, but that introduces ethical pitfalls of its own.

    1. I thought that nearly all journals allow editors to reject papers without review. I know (from sad personal experience) that the more selective journals do that.

      1. I don’t think it’s a universal practice, and it can go wrong. I was virtually blacklisted from one particular journal that desk rejected all of my submissions, until two specific individuals left the editorial board. I now publish there but lost nearly a decade due to these guys having a grudge against my doctoral advisor.

        For my conference I was in charge of a very broadly defined topic that included some cross-disciplinary participants. In a number of cases I needed my reviewers to tell me that a paper was gibberish rather than just using unfamiliar lingo. Desk rejections still consume time, and I think the system is already breaking with the glut of junk coming from rapidly expanding programs across Asia.

        Two years ago I was “invited” to be a committee member for a conference managed by a Chinese group. Although I said no, it didn’t stop them from dumping a hundred papers on me with a deadline of Christmas Day. I looked at a few of them… total nonsense. I didn’t reply and don’t know what happened with their event or proceedings.

  8. Of course, just criticism of scientific peer review as currently practiced will quickly be appropriated by pseudoscientists eager to play the tu quoque game of “I know you are but what am I?” Instead of seeing correction of error as an intrinsic virtue of scientific research, they’ll try to use the fact that there are flaws as an excuse to throw out science altogether, or at the very least extend the sloppy methods outwards so that they too may be included among the mainstream now.

    We’ve already seen them do this with Ioannidis and his call for more rigor.

    1. I was just about to post that same point. I can see this story being gobbled up by post-modernists, creationists, climate denialists and various other science-haters and regurgitated every time they’re looking for a way to undermine science and the scientific method.

      It’s striking to me that whenever flaws like this are discovered in science (faked fossil remnants being another) the people who uncover the whole unflattering mess are always, without exception, other scientists. Not creationists, not cultural theorists, not investigative journos or religious epistemologists with a fetish for ‘other ways of knowing’ – nope, just other scientists, quietly going about their work.

      1. Yeah, good point, and obvious but not often made explicit. Creationists still cite the Piltdown Man fraud as impugning the credibility of evolutionists, but never add that the fraud was uncovered by evolutionists and paleontologists!

      2. If I were more involved (say, still working in the philosophy of science) I would investigate whether any sociologists of science are investigating these matters. I suspect almost not, since as far as I can tell the field (a useful one) has been destroyed by decades of pomo. (Sane sociologists of science like Stephen Cole notwithstanding.)

    2. Oh yes. The entire interconnected history of evidence and observation supporting the science, not to mention how it actually works really well whenever it is applied, somehow becomes null and void because scientists and the administrators that they have to deal with demonstrate that they are just as human as everyone else.

  9. Hmm. The little backwater of biology I sometimes bob around in is highly fragmented. Many groups, no one knows very many of them, sometimes no one but a paper’s author has gone near the group he studied in decades. As a result authors often have no peers.

    In that situation all a reviewer can do is verify that a paper conforms to the norms. The reviewer can’t check the data for accuracy or that the rows and columns in the data array are correctly aligned or that the arithmetic is correct. So every once in a while a real clinker — no question of fraud or stupidity or inappropriate analysis, just the author’s failure to check the work — gets into a first line journal.

    And then there are the third line journals whose reviewers don’t even know the norms. I recently alerted a friend at NRM to a somewhat surprising paper in a journal published by a museum. He remarked that his group ignores everything that comes out of that museum. The sad thing is that although the surprising paper’s authors are incompetents and idiots they’re sometimes right. And not everything in the journal is bad or mistaken. More grist for some mill or other.

  10. I have never heard of fake reviews, so that is a new one for me. Having fake publications is something that I have come across. Years ago our department was in the hiring process for a new faculty member, and one of our faculty had found that one of the applicants had several important ‘publications’ in their CV that really did not exist. I thought it was pretty remarkable that one of us was so diligent as to actually check on the publications.

  11. The fact that grant money is being reduced (and the point that grant money is often not needed for research) is interesting to me. A common accusation I hear from global warming denialists (who almost always happen to be laissez-faire capitalists as well) is that researchers are “sucking at the teat of public funding” or some similar language, and then of course they’ll reference stories like this about peer review retractions as well.

    I don’t know if anyone who reads this site has the data, but it’d be interesting to know what percentage of research in that field is actually funded by grant money. It seems to me like funding wouldn’t be a necessity for most papers, as the global data is available from the agencies who collect it (obviously, this wouldn’t apply to stuff like Arctic field research). On the other hand, I have seen plenty of stories about conservative think tanks and corporations openly supplying money for “unbiased climate research.”

  12. Many are mandating that peer reviewers communicate through an institutional e-mail, rather than a Gmail or Yahoo account.

    I’ve had problems with that. Specifically, sometimes there’s a retired expert you’d really like to get, and they don’t have an institutional email. Although with academics this is less of a problem because their universities tend to let emeritus professors keep an email address indefinitely. I also know professional scientists who have started their own businesses…and who use gmail for their business.

    Because people who switch jobs may not have a qualifying email and because universities are fairly lax about removing email addresses, I’d say this is not a very effective gate. Though if a journal has some sort of policy that allows for exception after internal review (and documentation), that would probably work just fine.

    I’d still like to see the end of independent third-party reviewing services, as they’re just ways that journals fob off their own responsibilities on others, and they provide avenues for corruption.

    I am an independent third-party reviewing service! At least some of the time. But not for journal articles, for other things which will go unnamed to protect the innocent. I have no problem with the suggestion that such organizations don’t necessarily fit this line of work, primarily because a journal or journal editor is not expected to have much vested interest in publishing any particular article. In the rare occasions when they do have a vested interest, that might be a time the journal hires a third party service, particularly if they don’t have additional editors in the relevant sub-field that can do the job.

    In any event, I’d say that if a journal wants to try a nonstandard peer review process, they’d better do it with a lot of transparency and documentation. If you want to experiment with your review process, okay fine, but tell your stakeholders how you changed it. As with everything else in science, your choice of one particular methodology over another is often not as important as reporting accurately what your choice of methodology was; as long as you accurately represent your approach, the reading scientists can decide for themselves whether your approach was crap and not to be trusted, or not. Yes forbidding certain methods is one way to stop corruption, but shining daylight on whatever practice is used does a pretty good job of it too.

    1. I am a case in point. So are all my closest associates. None are currently professional academics (some, including me, have never been), yet all have contributed greatly in infectious disease epi. And on the flip side, the most egregious train wrecks in my field are perpetuated by the academics – the professional modelers. And they have the upper hand as informational gatekeepers: editorial folks, the cliques of like-minded and similarly-biased (ideological) individuals recommending each other for peer-review, deciding whether to even allow rebuttal in their journal pages, etc.

  13. “we need to relax the relentless pressure on younger researchers …, and we need to concentrate more on quality than quantity of papers”

    One way to do that would be to count citations, not papers.
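    The best-known citation metric is probably the h-index: a researcher has index h if h of their papers each have at least h citations. A minimal sketch of the computation:

```python
# Minimal h-index computation: a researcher has index h if h of their
# papers each have at least h citations.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 1, 1]))        # 1: only one paper has >= 2 citations
```

    Unlike a raw paper count, a metric like this rewards a few well-cited papers over a pile of ignored ones.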

    1. One way to do that would be to count citations, not papers.

      Much too easy to game.

      “Hey, Dave — I’ll make you a deal. I’m low on my citation count. If you’ll cite me on this paper I just finished, I’ll cite you on my next paper.”

      “Throw in a beer at the pub tonight and it’s a deal.”


        1. It is? I stopped publishing a while back but I never had that happen. Yes sometimes the introduction/background section reads like a ‘past credits roll’ but I think that’s reasonable,* and I don’t think I ever had anyone suggest extra papers I should cite except my co-authors.

          *In today’s society, I would say that under-citation is a much bigger problem than over-citation. In my experience (which is somewhat dated), science has the opposite problem of what Ben suggests; lazy students who need cajoling to do even the minimal acceptable amount of citation and background research. IMO giving too much credit to the work that’s gone before us is not a major issue in science.

        2. Not in my experience, although a lot of these shenanigans seem to happen in the bio sciences and not in physics, chemistry, etc. Or is it just my imagination? Is the competition tougher in bio? In physics these days, they just put everyone in the community on the paper!

        1. Considering you’d also be on the receiving end just as often, at least once you’ve worked up some referential credibility of your own, many would consider that a feature….


  14. Note this, from Springer:

    “The 64 articles that we have retracted represent less than 0.05% of the more than 100,000 articles Springer published in 2014. Overall, over 1 million articles are published in academic journals each year.”
