This year's "Peer Review Week" is apparently drawing to a close. Many publishers are celebrating the work of their peer reviewers, typically unpaid volunteers who review articles submitted to academic journals, grant proposals submitted to funding agencies, and occasionally papers submitted for presentation at conferences.
For those of us who work in non-academic institutions, such as private sector or government organizations, there is often a parallel internal peer review process that employees must pass before they are allowed to submit to an external journal or conference. There are notable differences between this internal process and the academic reviews I refer to above; the rest of this post will focus solely on academic peer review.
Peer review has taken a great deal of criticism in recent years. I personally am sympathetic to the view of Adam Mastroianni, who argues that no peer review is better than (at least conventional) peer review. Post-publication peer review (such as on PubPeer) is, for me, a preferred way for a community to assess the merits of a scientific publication. Peer review, as conventionally practiced, does not do what it claims to do: confer an imprimatur of quality and approval upon a scientific publication. There is no shortage of stories of terrible papers published in top journals that contain almost blindingly obvious flaws, along with excellent papers that were repeatedly rejected by outstanding journals. A defender of peer review may argue that surely the false positive and false negative error rates must be low, but it is impossible to verify such a claim because the peer review process is shrouded in anonymity, and publishers are reluctant to release any data regarding the performance of peer review at their journals.
I don't see anything of value that pre-publication peer review actually creates that cannot be duplicated, in a better way, by post-publication peer review.
A thought-provoking podcast interview with Dr. Mastroianni can be found at EconTalk (along with a reference to his Substack article on the topic). I won't rehearse all his criticisms here; he makes them more eloquently than I could.
One issue he does not address is the inclusion of ad hominem attacks on the authors of manuscripts in reviewer reports. The U.K.'s Institute of Physics Publishing (IOPP) last week launched a campaign against such unprofessional comments in peer review. This podcast is an extremely interesting discussion of the topic and of the proactive steps IOPP is taking to address it. IOPP also rates each review it receives, and is now willing to share this feedback with reviewers who request it. It's about time someone reviewed the reviewers! (I may well take the peer review training course offered by IOPP if time allows.)
I've been on both sides of peer review on many occasions. I've had experiences where the peer review process clearly improved my manuscript, and experiences where it clearly degraded the manuscript, that is, the published version is notably worse than the original, in my not-so-humble opinion. In a few cases, reviewers of my work have signed their reviews, i.e., revealed their identities; most of the time they are anonymous, but in a handful of cases I believed I could guess who some of them were.
I've also heard from others who've had their ideas stolen by peer reviewers!
On the other hand, in my highly biased opinion, every peer review I have performed for others has resulted in improvements to their manuscript. It is notable that every article I have ever peer reviewed has eventually been accepted for publication. I've never signed a review, though, so I've always been anonymous to the authors I've reviewed.
Finally, in a handful of cases I've posted non-anonymous post-publication peer reviews on PubPeer. And when I wonder about the legitimacy of a published paper, I'll check up on it in PubPeer. I advise readers to do the same.