A scientist says that peer review is obsolete

December 16, 2022 • 10:30 am

If you’re a scientist you’ll know this, and if you’re not, you should. The key to success as a research scientist is publishing papers in good journals, and the more papers the better. Ideally, as a biologist you’d publish in top-flight journals like Cell, Science, or Nature. Submitted papers are given to two or three anonymous reviewers who pass judgment on the paper, often deciding that it’s not good enough to be published (REJECTION), or that it might be published if some errors were fixed, new data analyses done, the discussion modified, or additional experiments performed. On the basis of the reviewers’ takes, the editor decides whether to publish the paper as is (rare), reject it outright, or reconsider it if the referees’ objections are met. All of us have to surmount these hurdles.

The peer-review system is supposed to guarantee the quality of a paper, but we all know it’s fallible. For one thing, reviewers rarely have access to the original data, and even when they do they rarely redo the statistical analyses of the paper’s authors. We may spend a few hours reading a paper, but we have other things to do (like RESEARCH), and a few hours is rarely enough. According to Adam Mastroianni, a postdoctoral research scholar at Columbia Business School, the reviewer system, despite involving 15,000 years of effort per year by reviewers, has failed. He describes its failure in this article on his website Experimental History (free to read, but subscribe if you read often).

Here’s why he thinks the review system hasn’t improved science:

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you’re like, “hey so how did my $100 million help this school system” and everybody’s like “uhh well we’re not sure it actually did anything and also we’re all really mad at you now,” you’d be really upset and embarrassed. Similarly, if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.

It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning physics discoveries from the 1990s and 2000s because there aren’t enough of them.

Well, the flatness or decline of research productivity doesn’t say to me that review isn’t working, for there may be other social or economic factors affecting productivity. His link to new ideas “failing” to displace older ones goes to an article about how the incursion of novel ideas has slowed, and we’re “trapped in existing canons”. Again, that may have little to do with the reviewers of papers, and more to do with our gradually homing in on the truth. And of course new ideas have displaced older ones: the “neutral theory” in evolutionary biology is one of them.

But Mastroianni does have a point: reviewing is often hasty, sloppy, and unable to catch errors in papers. (He also cites the failure of much work to be replicated as a sign of the impotence of reviewers to stop bad science, but failures of replication can have many causes, including different populations or sample sizes, that have nothing to do with the prowess of reviewers.)

Where he makes his strongest point is in citing studies where scientists run “hoax” tests, submitting papers with deliberately added errors. Those errors are caught only 25%–30% of the time. Also, if a paper is rejected or needs substantial revision, authors will often just send it to another journal (usually one that’s less selective), and eventually nearly everything can be published somewhere (there are 30,000 scientific journals!). But scientists are not promoted and lauded for publishing in low-quality journals.

So yes, the reviewer system is imperfect, often very imperfect, but what do we replace it with? We can’t just allow scientists to submit papers that aren’t even vetted, for then the journals, especially the very good ones, would be flooded with crap. So what is Mastroianni’s solution?

He doesn’t have one.

Here’s what he says:

What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back—I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.

. . .Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?

I don’t know what the future of science looks like. Maybe we’ll make interactive papers in the metaverse or we’ll download datasets into our heads or whisper our findings to each other on the dance floor of techno-raves. Whatever it is, it’ll be a lot better than what we’ve been doing for the past sixty years. And to get there, all we have to do is what we do best: experiment.

That’s not a good solution: how do you find papers if they’re scattered all over the Internet? One way is to put your papers on the arXiv site, which doesn’t cover all fields, and let people have at them. Scott Aaronson agrees in part with Mastroianni, but in further FB comments he argues that a system of reviewers (and appeals) really does improve papers.

Although Mastroianni makes a good case for flaws in the current reviewing system, I don’t think he makes a persuasive case to get rid of it entirely. There must be a way to exercise some quality control over papers, or otherwise we’ll have to wade through gazillions of papers by loons and creationists to find what we want.

Winston Churchill is supposed to have said, “Democracy is the worst form of government – except for all the others that have been tried.” I think we can say the same thing about the scientific review system.

h/t: Ann

42 thoughts on “A scientist says that peer review is obsolete”

  1. I spent much of my career dealing with peer review, both as author and as reviewer. It can be capricious (my first paper was initially rejected without comment by Nature and then accepted without revision after my major professor wrote a strong cover letter), and reviewers will inevitably miss some problems. One good development has been the increasing use of double-blind review, so that the reviewer, at least in principle, doesn’t know who the author is and vice versa. Also, the advent of preprint servers adds another source of comments, which can be valuable prior to publication. But overall, I agree with the conclusion that this system is flawed but we really haven’t figured out a better one yet.

  2. I edit a small biology journal with traditional peer review. Reviewers and editors often greatly improve the writing and data presentation in manuscripts by authors who are not native English speakers. This improvement to the published article doesn’t matter much to readers of the article who are native English speakers. Those folks can easily figure out mangled syntax or the effects of a bad Google translation. But improving the clarity and readability of a published article makes a big difference to everyone else who can’t easily write or read English. This service to readers in Africa, Asia, and South America is almost always overlooked by folks like Mastroianni in Europe and North America who take for granted that everybody can read & write in English as well as he can.

    1. I spend quite a bit of time correcting the English and making suggestions for ways to improve it, even on papers written by native English speakers.

      1. Yes for sure lots of other folks benefit from your editing! Do you find that the big qualitative improvements in clarity and readability most often come from editing papers by non-English speakers? That’s my experience.

        1. Generally. On the other hand, in one case I re-wrote parts of the text as examples of how the rest should be altered (and made a point of telling the authors this), and when I got the revision back, my alterations were included intact and the rest of the text was unaltered.
          This was a submission to a fairly prominent journal.

      2. I can’t speak to scientific journals, of course, but when I was on law review in law school, I spent a fair amount of time trying to translate US law professors’ prose into Standard American English.

  3. Do we know if all citations are “good” citations?

    I.e., could a paper be getting “highly cited” because it is a “bad” paper?

    Otherwise, peer review or not, citations are one form of currency… presumably, all highly cited publications are the product of peer review. Dr. Mastroianni is relying on a lot of opinion – not bad opinion, but my suggestion, IMHO, would be to add quantitation to weigh the claim.

    Jokes can be funny but he’s all, like, “ooo, why not get all colloquial with our writing” and stuff to be all accessible like common journalism
    [ skull emoji ]
    [other emojis that really old people are like “duuh, what is that” ]
    [ peace sign emoji ]

    ^^^* example of why rules for writing are important and have distinctions between intended audiences.

  4. Looking at Nobel winners misses the point. It’s the other tail of the distribution that peer review hopefully tamps down.

  5. Mastroianni’s solution seems to be crowd-sourcing.

    I can think of maybe a couple of examples where that worked. First, it’s proven useful to skeptics debunking extraordinary claims. There’s a UFO sighting with enough corroboration to be legitimately unidentified, but so far the experts can’t figure out what it is and the gullible are having a field day talking about alien spacecraft. The details get posted and reposted on various science-geek sites and eventually make their way to someone who knows a whole hell of a lot about visual illusions projected in certain ways on certain types of fog in certain conditions. Reflected headlights it is.

    Crowd-sourcing has also proven useful at catching plagiarism, since there is now a large group of readers that might include someone who thinks one of those passages looks mighty familiar.

    But neither example really supports the idea that useful crowds will be interested enough to review the incredible mass of research out there, especially if it’s not particularly exciting in itself. If it’s controversial, the opposition may comb through it looking for fatal flaws and gotchas, but again, most papers don’t meet that criterion. So if crowd-sourcing is indeed Mastroianni’s substitute for peer review, it’s probably not workable except in fringe situations.

  6. It’s almost a trope that any graduate student has heard – “Peer review isn’t great. But it’s the only thing we’ve got.” Is the author a rube?

    On a more productive note, there is a push to expand the use of “pre-emptive review.” The idea is that a journal will arrange to review a research *plan* in advance, including the methods and the hypothesis to be tested. If the plan is accepted, the journal agrees to publish the results (after additional review of the completed work), even if the hypothesis is not confirmed. There are many benefits, including preventing “positive results” bias and “p-hacking,” and the general benefit of getting feedback on your plan in advance. The practice is spreading in many fields, with demonstrable (if early-stage) benefits to reproducibility and study quality.

    See this excellent interview by Sabine Hossenfelder of Dorothy Bishop, on the reproducibility crisis and attempts to address it across fields:
    https://www.youtube.com/watch?time_continue=166&v=v778svukrtU&embeds_euri=http%3A%2F%2Fbackreaction.blogspot.com%2F&feature=emb_logo

    And anything by Sabine (“Science Without the Gobbledygook”) is worth watching!

  7. 2 cents: It’s true – we’re in the age of access, and there are too many data points for a human being to take it all in. And David Buss’s paper did a great job of looking at how evolutionary adaptations are barriers to “the truth” and “science”, even among the smartest of people (= psychologists).
    The same is true of “Best Sellers” (books). There is no objective process, and there are too many. Who decides? Those in positions of power. And power now is subject not to merit (= strength, wit, talent, competence, skill, etc.) but to votes/clicks/likes/views, and DEI! Or fashion? And that process is corrupt, too.
    Hang in there? Thanks for what you do, sir.

  8. Mastroianni’s grousing is wrong-headed for exactly the reason stated above in comment #4. Peer review is not designed to enhance research productivity or to create new ideas to displace older ones: its function is only to establish a base level of competence.
    Mastroianni’s laments are like complaining that the Merchant Marine Officers license system has not resulted in the discovery of any new continents since the 18th century.
    In addition, his statement that “Many peer-reviewed findings don’t replicate, and most of them may be straight-up false” is simply not true. “Straight-up false”? Maybe in certain social psychology fields, but not in physical and biological sciences.

    As for the lack of revolutionary findings in recent decades: how about the organization of the genetic material, with introns, which absolutely nobody expected? How about the discovery of new species, from mysterious marine bacteria and viruses to extinct members of the genus Homo, by means of PCR-amplified DNA? Did Mastroianni notice the most recent Nobel Prize in Physiology or Medicine?

    Maybe Mastroianni’s article should have been subjected to some peer review.

  9. Thanks for the post.

    I worked for several years in the editorial office of a respected scientific journal. I was already a strong believer in the peer-review system when I started that job, but my experience working at that editorial office made me appreciate just how crucial peer review is when it is done right.

    Yes, peer review is an imperfect system. Yes, peer review is time-consuming and commonly not an efficient process. But, just as was said above and in Professor Ceiling Cat’s post: what is the alternative? With the ever-increasing number of papers published every day (it is so hard to keep up), what are we supposed to do? I can’t find the time to read all the papers that I think may be interesting, even when I limit myself to the journals I trust…

    Anyway, I really wanted to thank you for this post: it is always great when you have posts on scientific findings and on the scientific process. They may get fewer comments than posts on other issues, but I wanted you to know that some of us do enjoy your posts on science itself.

  10. This article raised some questions for me; Professor Coyne addressed some, but let me ask two general questions.

    1. How are peer reviewers selected? Do they have pre-existing associations with the journal or is it more random? How is bias in the form of having a peer reviewer whose own work is in disagreement with the author’s guarded against? Like would Richard Lewontin ever peer review a paper by E.O. Wilson?

    2. What is it specifically that peer reviewers are looking to correct, especially when many of the papers introduce original research that doesn’t have a fact base yet? Is it just whether the paper is well argued? Would it then be necessary to have the peer reviewer be in the same field as the author, as opposed to just someone familiar with how argumentation works?

    1. The associate editors or head editor selects them, usually based on their expertise and on previous experience with their ability to look carefully at a paper. When I was associate editor for two journals, I wouldn’t deliberately send papers to people suggested by the author (clearly suggested on the basis that they were pals and wouldn’t be critical). I don’t remember ever sending a paper to someone who was a scientific enemy of an author; that would generally be unproductive, except when those opponents could raise serious issues. Lewontin wouldn’t review papers by Wilson because Wilson worked on ant biology and Lewontin was a theoretical population geneticist.

      Reviewers look for many things. Is the question an important one? Did the authors overlook alternative experiments or explanations? Did they give the proper caveats? Usually, the peer reviewer would indeed be in the same field as the author.

    2. With regard to 1): in the journal I used to work for, the editorial office did a lot to try to avoid conflicts of interest, such as not inviting authors from the same institution or people who had been co-authors on previous papers from the same group. As for 2): it is also good to keep in mind that, in the end, in most journals the power to decide rests with the editor. When editors select reviewers, they are simply seeking expert opinion on the paper from peers; they then have to evaluate the reviews, decide how to weigh them, and take them into consideration, and it is definitely in their power to disagree with the reviewers and raise issues of their own… Again, when it works well it is an excellent system.

  11. In materials physics, even the lowliest journal wants a two-week turnaround time for reviewing. Hard to do anything more than a thumbs up or thumbs down.

    1. This is endlessly annoying. And then the authors don’t see the reviews for another few weeks. What happens in the interim is a mystery.

  12. In the end, the only ones who have to answer for any fraudulent claim are the authors, not the journals or funding agencies. If somebody wants to publish false data, it is possible even with peer review.

    Scientists have the ultimate responsibility.

  13. I was an associate editor of a journal while I was in academia and, of course, I both published peer-reviewed articles and served as a peer reviewer many times. It is very time-consuming to review an article submitted for publication. Often there are data analyses that are prohibitively difficult to validate and, of course, one has to trust that the data exist at all. So, reviews are not always thorough. That’s an inherent limitation to the process.

    How thoroughly one reviews a submission is also a function of the number of articles that are waiting for your attention. Often submitted articles simply show up in the mail, with requests for review from the journal editor. If you are on the reviewer list for several journals, the demand can be overwhelming, leading, of course, to more superficial reviews. It’s not a perfect system, but I can’t think of a better one. Allowing everyone to publish whatever one likes would create chaos. Doing so would, in effect, put every reader into the position of being the peer reviewer. There is simply too much out there for this to be practical.

    So, I still accept the concept of peer review.

    One thing that I would like to see go away is *blind* review, where the author never knows who the reviewer is. (Often one can figure it out even if the review is blind.) Blind review, like anonymous social media posts and “unnamed sources,” provides many opportunities for abuse. Blind reviewers will say things in a review that they will never say to someone’s face. I once received a review that had the following sentence: “We should not encourage another young thinker to megathink when there are too many megathinkers out there giving us too little that is believable.” So there! That’s not mean by modern standards, but it was simply mean in the early 1980’s when I was innocently trying to make my way in the world. After enduring that outburst, I always signed my reviews. Doing so even led to a few collaborations! Blind review needs to go (IMHO). *Double-blind* might be an improvement, as recommended by another discussant above. But I would err in the direction of more transparency, rather than less.

    I also had reviews that helped me a great deal. Once I submitted an article to Nature. After a few weeks I got a response in the mail indicating that Nature wanted to publish my article, but they wanted me to cut its length in half! Impossible, I thought. Well, it wasn’t impossible. I cut it in half, and the final result was twice as good!

    Lots of what I read could be cut in half and be twice as good—perhaps even what I’ve just written above.

    1. Something struck me about this:

      “Blind reviewers will say things in a review that they will never say to someone’s face. ”

      1. Most review is in written form.

      2. As such, there is no guarantee the PI did not merely delegate the task to post-docs.

      So while a review might have a professor’s name in the signature, unless a special note makes this clear – or unless the prose or other clues say so – the signature only says what _laboratory_ or research group is responsible.

  14. I fully agree with him: peer review (in its current form) is close to useless. I (and others) have written a bit about it on several occasions but I’ll just put some of the main points here.

    1) Peer review fails to keep out bad papers because authors will just keep submitting until they eventually get published somewhere.
    2) Peer review seldom changes the conclusion of a study, so it’s mostly nitpicking over small issues.
    3) Peer review does not catch most errors in the first place!
    4) Peer review after the work has been done is a waste of time/money.

    There are solutions.

    1) Pre-registered studies where the methods and aims are peer-reviewed before the research is done. It will save time and money as that’s the stage where peer-review can actually make a difference.
    2) When registered studies are submitted as finished products, it’s quick to check that they match what was registered and put them online.
    3) Post-publication review can be used to find/fix problems and doesn’t rely on just 2-3 reviewers. Yes, many papers won’t get that post-publication review but the important ones will.
    4) There’s no need to rely on journals to judge quality (they fail at that task) but we can have trusted people and ratings systems that allow the quality of papers to be judged.

    There is so much that is broken in scientific publishing, and peer review in its current form is one of those things. It’s also not integral to the scientific process: peer review as we know it has only existed since the 1970s or so. We can do better.

    Further reading on both problems and solutions:
    History of peer review (Jonny Coates): https://twitter.com/JACoates/status/1583822377498341377
    Is Peer Review a Good Idea? (Remco Heesen and Liam Kofi Bright): https://www.journals.uchicago.edu/doi/10.1093/bjps/axz029
    Four proposals for a more reliable scientific literature (Jason Bosch): https://www.sajs.co.za/article/view/4811
    Tedious publishing: Preregistration can help (Jason Bosch and Anoop Kavirayani): https://evidenceandreason.wordpress.com/2020/04/28/tedious-publishing-preregistration-can-help/
    How might science benefit from a world without journals? (Jason Bosch): https://evidenceandreason.wordpress.com/2019/10/26/how-might-science-benefit-from-a-world-without-journals/
    The triumphs of post-publication peer-review (Sophien Kamoun): https://kamounlab.medium.com/the-triumphs-of-post-publication-peer-review-abef40466164

    1. I think you make some very interesting points here. Implementing them would not only change the journal publishing process but would represent a significant overhaul to the entire scientific enterprise.

      The most disruptive idea above (and I mean that in a positive sense) is that of pre-registering studies. Doing that would keep a lot of garbage from even getting started. In many ways, this is what the granting process is all about today. Grant proposals—or at least the ones that I wrote and reviewed—included details about purpose, materials, and methods. It is a form of pre-evaluation. Pre-evaluation would probably have the effect of publishing only research reports that are good enough to garner grant support *or* those that are good enough and important enough for the researcher to go to the significant effort of justifying the work before it’s done. That would eliminate a lot of potboilers, the many papers that are just slightly adjusted versions of earlier papers (to get the “productivity” count up), and other such dreck that gets out there.

      It’s a worthy and creative effort you are undertaking, Jason. I’ll have a look at the materials that you cite above.

    2. I disagree with most of this.

      “Peer review fails to keep out bad papers”: That is not its job. There will always be trash-bin journals, but articles that are relegated to them are seldom noticed or seen. Publication in a widely respected journal gives the reader some confidence that the article has no fundamental errors. Peer review is worthwhile for that reason alone, even if it did nothing else.

      “it’s mostly nitpicking over small issues.” Maybe, but small issues can be important for proper interpretation and replication. And sometimes the conclusions are affected. Reviewers’ and editors’ comments have frequently pushed me in new directions that I had not thought of. One of my editors, Francis Balloux, made a suggestion to connect my mathematical proposal to a classical genetics model, and this ended up being the most important result of the whole paper. And my own reviews of other people’s papers have sometimes caused the authors to retract the submission and start over, sometimes reaching the opposite conclusion from their original one.

      “Peer review does not catch most errors in the first place!” This is a dubious claim. In my experience, both as a reviewer and as a reviewee, most major logical and conceptual errors are caught by peer review. Data collection errors are easier to miss, and you might be right about those.

      “There’s no need to rely on journals to judge quality (they fail at that task)…” The data seems to support the opposite conclusion. Just compare the contents of a high-quality journal versus one of those predatory journals. There is no comparison between them. And the data shows that respected journals are more likely than trash-bin journals to detect and reject hoax papers.

      “…but we can have trusted people and ratings systems that allow the quality of papers to be judged.” That is a good description of peer review.

      All of my papers benefited greatly from peer review. Even when a reviewer was wrong, their comment would alert me that some part of what I wrote was unclear. Sure, the system is not perfect, but it unquestionably improves the course of science.

        1. Publication in a “well-respected” journal gives a false confidence that a paper has no fundamental errors. Papers in high-impact journals tend to have poorer-quality methods (https://www.frontiersin.org/articles/10.3389/fnhum.2018.00037/full) and higher retraction rates (https://journals.asm.org/doi/10.1128/IAI.05661-11) than papers in lower-impact journals. The quality aspect is concerning; the higher retraction rate is unsurprising to me. The scientific glam mags want to publish attention-grabbing results, which means results that are more surprising and unusual, and the more surprising a result is, the more it differs from our expectations and the more likely it is to be wrong.

        Small issues can be important but if they aren’t making a major change to the paper’s conclusion then they aren’t making a major change to the message. It would be interesting to see how often reviewers cause the conclusion of a paper to change but I suspect it’s a very, very small number of cases.

        Peer review doesn’t catch errors. I’m sure it caught most errors that you were aware of but you weren’t aware of the ones that weren’t caught and that’s what matters. When errors were introduced to papers and sent to peer reviewers, a majority of introduced errors were missed (https://journals.sagepub.com/doi/10.1258/jrsm.2008.080062). Peer-review is also unsuited to catching major issues like fake data. Elisabeth Bik has made a name for herself finding manipulated images that all went through peer review. Most researchers are not trained how to systematically review and catch errors.

        Peer review is no judge of scientific quality. Even ignoring the failings of peer-review, it has a huge bias towards author status. One group sent the same paper for peer-review while either having the paper be submitted anonymously, with the name of an early-career researcher or of a high-profile Nobel prize laureate. If peer-review judged the quality of a paper, the same paper should be treated the same way. “We find strong evidence for the status bias: while only 23 percent recommend “reject” when the prominent researcher is the only author shown, 48 percent do so when the paper is anonymized, and 65 percent do so when the little-known author is the only author shown.” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4190976) That can be addressed with blinding but a 42% gap in rejection rate shows that peer review is not a measure of a paper’s quality.

        1. “Peer review doesn’t catch errors. I’m sure it caught most errors that you were aware of but you weren’t aware of the ones that weren’t caught and that’s what matters. When errors were introduced to papers and sent to peer reviewers, a majority of introduced errors were missed.”

          Your first sentence is falsified by your second sentence. Some errors WERE caught. In my post I only claimed that “most major logical and conceptual errors are caught by peer review”. You expect perfection, but imperfection does not mean “useless”.

          “a 42% gap in rejection rate shows that peer review is not a measure of a paper’s quality.”

          Again, this is your fundamental mistake. You judge peer review as if it should be perfect. It is not. But it IS a measure of a paper’s quality, albeit a noisy one. Editors find trusted experts who evaluate the paper. This is the very thing that you “propose” to replace peer review.

          1. Sorry, I should have said, “doesn’t catch most errors.” I don’t expect it to be perfect but I also don’t think we can even call it good if it only catches a third of major errors.

            Perfection is not the goal. Just adequacy is the goal. A 23% rejection rate for one author and a 65% rejection rate for a different author but the same paper? I wouldn’t call that noisy, I’d call that completely unreliable.

            I’m not going to continue much more because I know Prof. doesn’t like long comment threads, but I am curious how much variation you are willing to accept. The published studies I mentioned put peer review at finding 33% of major errors, and the rejection rate varying by 42% depending on the author, and you consider that good. At what rate of detecting major errors, and at what level of noisiness in quality judgments, would you become concerned?

            1. I also am running up against the rules, but finding 33% of the major errors is still doing an important service to improve science, and I wonder how “major errors” were defined. I need to read those studies before I can judge them.

              For rejection rates varying with author, the evaluation of those figures is not straightforward. One has to weigh the probability of a major error given that the scientist is known to be a good one against the probability of a major error across the whole population of scientists. Knowledge of an author’s degree of expertise does give some information about the quality of the work, and peer reviewers often have to recognize that they are not as knowledgeable as some of the authors they are reviewing. Also, part of the peer-review process involves judging the importance of the work, not just its correctness. A reviewer might legitimately believe that a Nobel laureate has better insight into a work’s importance than the reviewer himself or herself does. Is this perfect? No. Bias due to author status is real, and can be problematic. But in the study you cite, the anonymized rejection rate was close to 50%, a coin toss. This suggests the article was neither rife with errors nor exceptionally good. The evaluation of such a paper might very well turn on minor factors such as author reputation. I wonder what would have happened if the paper were really good, or really bad. I bet the acceptance rate would not depend so much on author status in that case. That would be a better test of peer review.

              I agree completely with you about peer review’s inability to check data collection quality.
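The conditional-probability argument in the exchange above can be made concrete with a toy calculation. This is a sketch with entirely hypothetical numbers, not figures from the cited study; it only shows the mechanism by which a reviewer's prior belief about an author can rationally shift the expected error rate of a paper:

```python
# Toy illustration of the conditional-probability point.
# All base rates below are hypothetical, chosen only for illustration.

def expected_error_rate(p_err_strong: float, p_err_weak: float,
                        p_strong: float) -> float:
    """Overall probability of a major error, averaging over whether the
    author is a 'strong' or 'weak' researcher (law of total probability)."""
    return p_err_strong * p_strong + p_err_weak * (1.0 - p_strong)

# Hypothetical base rates: strong authors err 10% of the time, weak ones 40%.
P_ERR_STRONG, P_ERR_WEAK = 0.10, 0.40

# Anonymous submission: the reviewer can only assume the population
# base rate of strong authors (say 30%).
anon = expected_error_rate(P_ERR_STRONG, P_ERR_WEAK, p_strong=0.30)

# Known Nobel laureate: the reviewer is nearly certain the author is strong.
laureate = expected_error_rate(P_ERR_STRONG, P_ERR_WEAK, p_strong=0.95)

print(f"expected major-error rate, anonymous submission: {anon:.3f}")
print(f"expected major-error rate, known laureate:       {laureate:.3f}")
```

Under these made-up numbers the expected error rate for the known laureate is far lower than for the anonymous submission, so some divergence in rejection rates is rational rather than pure bias; whether a 42-point gap is justified is, of course, exactly what the two commenters dispute.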

        2. “Small issues can be important but if they aren’t making a major change to the paper’s conclusion then they aren’t making a major change to the message.”

          Scientists carefully consider their conclusions prior to submission, because they know that their paper has to pass through peer review. The benefits of the peer review system kick in even before a paper is sent out for review.

    3. I disagree that the present peer-review system is useless, but it could definitely benefit from some improvements, especially your measure No. 1. In fact, that is already done when applying for grants; one just informs the journals to be targeted. I’d also like a fifth measure: a database of all scientific journals, with open access to the reasons a paper was rejected. That would reduce ‘journal shopping’.

  15. Would anyone peer-review a paper they know is intended to be shared on social media?

    Just because it is shared on social media does not mean it would help if it were peer reviewed.

  16. bioRxiv (and related preprint servers for other fields) already exists. People post papers on bioRxiv before getting them accepted at journals. There is more access to non-peer-reviewed papers now than there has ever been.

    As to peer review . . . borrowing from Winston Churchill, peer review is the worst process for publishing papers, except for all the rest. I know of no other option for screening out bad or poorly written papers. Yes, some bad papers get through, but it’s a lot better than nothing.

    1. I’ve been trying for a couple of years to solicit good studies for my journal by contacting authors of bioRxiv preprints. Every preprint has turned out to be already under review at another journal. Preprints used to be a way to get feedback on a manuscript or study, but at bioRxiv it’s become mostly a way to establish priority in public while the manuscript is being reviewed in private at a journal. Not a bad thing, but a different purpose. At least that’s been my experience recently.

  17. This article is trash. Non-sequitur after non-sequitur. With every assertion that peer review hasn’t…
    increased productivity
    produced new ideas that overturn old ones
    resulted in more impressive Nobel Prizes
    identified or fixed the replication problem
    filtered out falsified data,

    I kept saying to myself ‘but that’s not what peer review is for!’ Peer review is merely a threshold of scientific quality standards for a given journal.

    His counterfactual is an N=1 anecdote of a slaphappy article getting more ‘likes’ than his PNAS paper? So now peer review is supposed to increase the attention a scientific paper gets? What if the jokey article just got more attention because of a novelty effect: others shared it widely because they’d never seen a published journal article that made jokes, so it went viral?

    Peer review sucks? Not really: crappy journals suck, crappy research sucks, and crappy researchers suck. No doubt 30,000 journals guarantee a steady supply of excrement, but this embarrassingly weak article sucks too, with a painfully weak thesis supported by weak or irrelevant evidence and just-so conclusions. I would’ve rejected it, especially had it been written by a scientist.

  18. I am pretty cynical about this subject. I served on editorial boards, was an associate editor, published a couple dozen papers, and reviewed many more. I used to take reviewing seriously, spending a lot of time on honest and helpful evaluations. Then I’d get these ludicrous, point-missing, superficial reviews of my own work, which overworked editors would take at face value. Aggravating.
    I’ll also add that publishing is no longer a way to avoid perishing if you’re not bringing in enough grant money. Teaching, published research productivity, departmental service, and student mentoring are all trumped by perceived inadequate fundraising.
    I’m a lot happier since I quit academia and became a mercenary field biologist. (But I do feel guilty sometimes about all my unpublished data.)

    1. That’s a much more candid and honest critique of academic publishing than of peer review alone, and I agree. Professors writing grants to do research (hugely important) are now treated like revenue streams. Many US universities siphon off 70–90% of academic researchers’ grant money as indirect costs, forcing them to write grants more frequently to support their labs and do less research. How is that acceptable? With undergrad tuition clocking in around $65K/year, colleges and universities have a massively dysfunctional operating model. And with big schools’ endowments reaching $8–40 billion, how is squeezing science departments for cash justified? Not exactly linked to peer review, but the funding issue is a root cause of much of the pain faced by academic researchers, and it’s a tragedy.

      1. Add to that the scandal of academic publishers preying on scientists and their funding sources to squeeze out as much money as possible from them, while the real work of peer reviewing is done for free by busy professionals, with no compensation for time spent on the reviews. I think this is what is most wrong with the peer review system.

        1. Spot on, Lou. The big scientific-journal publishers bleed the scientific community with their price gouging. Reviewers should be compensated, which would draw many talented reviewers into the pool who would otherwise rather not give their time away for free. Imagine: at $50/hr, reviewers themselves could then be better vetted for quality. I predict a journal’s standards would be better maintained.

  19. Why could scientists not learn from the best practices in software development? I do not see the point in competing for the favors of a few centralized authorities. Let users and groups establish their reputations for promoting and reviewing interesting material without gatekeepers. There is no point in having printed journals anymore, with their byzantine writing and formatting guidelines (I’m saddened by all the effort wasted for pretty-printing with LaTeX).
