Last week in the online magazine Physics, psychologist Simine Vazire published a call for peer review reform. Hers is not the first, nor will it be the last. My own experience with peer review is mixed, and I would support calls for reform. In discussions with other scientists, I have heard appalling examples of abuse, such as anonymous reviewers stealing authors' work and credit.
All the manuscripts I have submitted for peer review have benefited from the process. The published versions are usually improved by the vetting, and in most cases I prefer them to the initially submitted drafts. In two cases, however, the published version is shorter or more condensed than I would have preferred, and 19 and 12 years later, respectively, I am still sore about that. In one case the review process dragged on for over a year, in my opinion needlessly, because of a single obstinate reviewer who added no value whatsoever. In another case I eventually published a manuscript in unrefereed form (in a conference proceedings) after the journal rejected it for being "too simple". (The talk corresponding to that paper won an award at the conference where I presented it!) I have since pulled the same trick with a second rejected paper, which now gets plenty of reads where I posted it (academia.edu). I strongly believe that the taste of referees and editors does not necessarily correspond to that of readers.
As a referee, I likewise feel that I have substantially improved every manuscript I have reviewed. However, I have seen published research elsewhere that should have been better vetted. I have taken to posting critiques of such papers on PubPeer, a website for open, post-publication "peer review". In my view, tools like PubPeer should be better integrated into the research culture: upon reading a paper you may want to build on, one of your first instincts should be to look it up on PubPeer, and scientists should not hesitate to post their criticisms or discussion of published work there.
Vazire makes some pertinent points. We don't understand how well peer review performs because essentially no empirical research has been done on it (with the exception, she says, of medical journals), a hypocrisy given that scientists pride themselves on being "evidence-based". She calls for giving peer review researchers access to now-confidential materials, the raw material of peer review, so as to better study the process. (It would be a brave editor who agreed to such sharing!)
Other issues she raises (status bias and bias due to the gender, institution, or ethnicity of authors) could be addressed by double-blind peer review, a process I have participated in myself. She calls for more diversity in recruiting referees, though the flip side is that this places additional (unpaid) labor on minorities, who in many cases already carry an extra service burden at their institutions. I agree with her that mechanisms to hold editors accountable for their own abuses are nowhere near sufficient.
I support Vazire's recommendation to disclose referee reports and decision letters, but such documents are taken out of context without the corresponding draft versions of the manuscripts. Preprint servers and tools like GitHub could therefore be used in conjunction with transparent peer review.
Vazire also requests specialized review processes; statistical analysis is her example. In principle this is a sound suggestion, but in practice I frankly believe statisticians in this role might do harm as well as good. As I have written previously, much of the damage to the research enterprise has been due to the influence of the statistical inferential framework in the first place.
Peer review is a broken process, but it should be reformed rather than discarded. Statistical inference is broken too, but we have given statisticians a chance to reform it, to no avail; discarding it in most cases would be a step forward.