Since beginning my exploration of peer review last year, those four dirty little words have constantly occupied my computer screen. They seem to be everywhere, popping up in news articles and Twitter feeds alike as every “expert” weighs in on the unmitigated disaster that is academic publishing in the age of the Internet. What started out as a basic fact-finding mission, spurred on by some unexpected inquiries about the magazine (no, IMMpress Magazine is not a “peer-reviewed” publication), quickly devolved into an all-out inquest as it became apparent that the ongoing dilemmas with funding, graduate education, academic employment and science’s public reputability are inextricably tied to the well-documented shortcomings of the peer review system. In light of this, I have decided that this article, once meant to be a quick, delightful summary for those not in the know, will instead attempt to thoroughly lay out the facts (and the occasional tentative opinion) about peer review, publishing, and the problems to which researchers can no longer turn a blind eye.

THE HISTORY BOOKS FORGOT

For the journals of the 18th and 19th centuries, a standardized review system was neither needed nor desired. Many of these early journals were created specifically to showcase the work of a particular research institute or medical society; as such, the decision of what and when to publish rested solely with the editor or an internal committee. Not only was there little incentive for timely publication or quality control, but there were often more journal spaces available than articles to fill them. Furthermore, prior to the invention of the Xerox photocopier in 1959, producing the additional manuscript copies needed for external review was both painstaking and expensive, making the process far more trouble than it was worth.

Following the two world wars, however, scientific research took on a new bent. With an unprecedented influx of highly specialized papers, editors could no longer rely on their own knowledge to make the now necessary judgement calls about quality and importance, forcing them to look further afield for “peers” with the relevant expertise. Meanwhile, the increased reliance of research on public funds mandated a certain amount of transparency and regulation to ensure that government money was being well spent. As a result, most journals adopted some version of what is now considered the traditional model of peer review by the mid to late 20th century. In this system, the chief editor for a journal acts as the first round of quality control, immediately rejecting articles with glaring errors or those otherwise unsuitable for the journal. Manuscripts that pass this vetting process are sent to an associate editor and several specialists in the field for review. These individuals assess the article’s content, the level of interest in the subject matter and the overall presentation, and weigh in on whether or not it should be published. Based on these recommendations, the chief editor then makes the final call — accept, revise, or reject.

TROUBLE AHEAD, TROUBLE BEHIND

The deluge of literature retractions over the past few years makes it seem like peer review’s failings have only recently come to a head. However, the difficulties that inevitably arise in a system maintained by scientists for the purpose of policing their own work have been the subject of discussion in the academic community for decades. The First International Congress on Peer Review in Biomedical Publication, held in 1989, discussed everything from the unequal distribution of the reviewing workload to the risk of suppressing innovation to the hard proof that the peer review process was susceptible to bias. Going back even further, Ernest Hart, editor of the British Medical Journal in 1893, remarked that “it [peer review] is a laborious and difficult method, involving heavy daily correspondence and constant vigilance to guard against personal eccentricity or prejudice, or — that bugbear of journalism — unjustifiable censure.”

Over a century later, those words still ring true. Despite the significant advances modern scientists have made over their forebears when it comes to communication, the current peer review system continues to struggle with gross delays in manuscript turnaround. It is not unusual for papers submitted to journals with pre-publication peer review to be recycled within the system for up to a year, and if the final verdict is rejection, the entire process of submission, review and revision starts anew at another journal. Part of the reason for the delay stems from the difficulty in recruiting enough qualified reviewers to keep up with the surfeit of publication-hungry researchers; because many review processes remain anonymous (and unpaid), there is little incentive for active scientists to spend their precious time reviewing manuscripts.

TOO GOOD TO BE TRUE

Peer review may be bogged down, but the pressure to deliver neat and tidy answers for public consumption shows no signs of abating. Factor in the need for “high impact” publications to land a job (or a grant), and it is little wonder that more and more authors, editors and reviewers have resorted to manipulating the peer review system in the hopes of lessening their load and improving their odds. Some of this subterfuge comes in the form of fabricated data — clinical trials that preferentially report positive results, western blots that have been spliced together in Photoshop, or flow plots that have been “beautified” through the removal of unseemly outliers. While good reviewers can sometimes spot these suspicious data in advance of publication, peer review works on the basis of trust between members of the scientific community and the process is not designed to detect intentional deceit. To combat these inherent weaknesses in the system, many journals now have strict criteria regarding image manipulation and encourage the provision of original datasets to allow for independent confirmation of analyses; however, these standards are not universal and falsified results can still slip through the cracks. At other times, it is the reviewers who abuse the trust implicit in peer review by plagiarizing novel methods or findings from a manuscript submitted for review and then publishing them in another journal.

Undoubtedly, the most flagrant and alarming occurrence of peer review fraud has been the emergence of “peer review rings.” Since this phenomenon was first exposed a few years ago, at least 250 papers, many in journals run by big-name publishers (Springer, Elsevier, BioMed Central, SAGE), have been retracted due to evidence of fake or unscrupulous peer review. Several of these rings centered on a single author who rigged the process such that any requests to review their manuscript would be directed back to them or to complicit colleagues. For others, third-party organizations that charge authors a fee to have their manuscripts revised and submitted for publication were responsible for faking peer review, often without the authors’ knowledge. In both cases, the co-opting of the peer review process was made possible by software that allows, even encourages, authors to suggest reviewers for their manuscripts. This software has been a great boon for journals, as it saves editors considerable time tracking down the relevant experts for every paper that passes their initial evaluation. However, the ease with which the system has been duped reflects the laxity of some journals when it comes to checking referee credentials — in many cases, red flags such as non-institutional email addresses or rapid review turnarounds were missed. As a result, many journals are now either returning to the practice of requesting reviewer suggestions through cover letters or eschewing it altogether.

SHE HAD A NAME

Image via Pexels.com.

If you haven’t heard about #AddMaleAuthorGate by now, I would strongly suggest plugging it into your nearest search engine. Along the lines of the recent Tim Hunt scandal (#distractinglysexy), this snappy hashtag refers to the comments made by an anonymous PLoS ONE reviewer on a manuscript examining the difference in publication rates between male and female PhD students. Proving once again that sexism is alive and well, the reviewer not only suggested that the two female authors might benefit from having a male co-author on their studies, but also proposed that the discrepancies in publication rates between the genders could be due to men working “more hours per week than women, due to marginally better health and stamina.” Sadly, this situation is neither unique nor surprising. Numerous studies have shown that reviewers of both genders unconsciously undervalue the achievements and capabilities of women, especially in male-dominated fields. Given that this bias translates into a need for women to publish more than men to be seen as equally competent when applying for grants and postdoctoral fellowships, it becomes clear why comparatively few female graduate students make it to the upper echelons of academia.

While scenarios like the one above receive the most attention in the press, sexism does not have a monopoly on bias in peer review and publishing. There is strong evidence that an author’s presumed ethnicity and the perceived prestige of their affiliated institution directly impact reviewers’ opinions on the quality and importance of their work. Many scientists have also observed over the years that the literature is skewed against research ideas that challenge current paradigms, against replication studies, and against studies that report negative results — all of which are needed to paint a full picture of scientific knowledge and to drive research towards practical applications.

COLLABORATE AND LISTEN

If peer review is so broken, why bother with it at all? Some scientists believe that we shouldn’t; after all, despite widespread faith in the practice, studies of peer review consistently fail to find any empirical evidence that it prevents fraud, reliably detects honest scientific errors, improves the quality of the literature or accurately predicts the importance of a finding to the field. Nevertheless, I share the opinion of many others that despite its flaws, peer review in some form is still necessary. A system of checks and balances will always be important in science because new avenues of thought (along with the funding and public policies that go with them) are built on existing knowledge. If that foundation is less than stable, both science and the public trust it requires to function are jeopardized.

Some of the peer review methods that exist pre- and post-publication. Graphic credit: Kieran Manion. Image credit: Florian Klauer via StockSnap.io.

Luckily, the expansion of online publishing in the past decade has forced academia to develop peer review processes that can keep pace with the transfer of information in the digital age. For pre-publication peer review, blinding reviewers to the identities of submitting authors can help reduce bias, while employing transferable peer review and continuous publication greatly reduces the time required to review manuscripts. After many years of success in physics and mathematics, pre-prints are also becoming popular in the life sciences as a way to ensure that data makes it into the scientific literature regardless of whether or not it is officially published. Going a step further, there are now many online, open access platforms (see graphic) that serve as alternatives to formal publication altogether and instead employ post-publication peer review. In this model, reviewing takes on a more “journal club” format that promotes the transparent discussion of data, methodologies and conclusions, facilitating replication studies and allowing for dynamic revision of the posted content by a broader selection of the scientific community.

SCIENCE GETS DONE

At its core, peer review is a way to ensure that the findings put forth by the scientific community are rigorously derived, clearly explained, reasonably novel and amenable to practical application. Is peer review truly broken? I leave that for you to decide.


Kieran Manion

Design Director
Kieran Manion is a senior PhD student studying the breakdown of B cell tolerance in systemic lupus erythematosus in the Department of Immunology at the University of Toronto. In her spare time, she practises using digital platforms for general artwork and graphic design.