Monday, January 20, 2014

Wrap-up on last year's Science and The Economist special issues

In this post I will wrap up my discussion of the Science special issue on “Communication in Science: Pressures and Predators,” from last October, and the Economist special feature on “How Science Goes Wrong” (Oct. 19-25, 2013, issue). I have discussed various aspects of the Science special issue previously, particularly the open access “sting” operation by John Bohannon (with a follow-up here). I wrote about the excellent article by Jennifer Couzin-Frankel in the Science special issue, as well as the Economist special issue, here.

In this post I first want to summarize the salient points from the policy forum article in the Science special issue, by Diane Harley (2013). The article begins by discussing the potential for vehicles of communication other than the traditional peer-reviewed journal article. Social media technology, the open source movement in computer science, and crowd-sourcing efforts such as Wikipedia illustrate the possibilities. The ArXiv preprint server and open access journals are specific manifestations within the scholarly community, along with less laudable developments such as bibliometrics for evaluating the quality of a researcher, a journal, or an institution. Harley's research has found, however, that the scientific community, including its youngest members, has been resistant to these new developments. The traditional peer-reviewed article appears to be the least risky form of communicating research, particularly in view of funding, tenure, and promotion practices. Harley's study of 12 disciplines “revealed that individual imperatives for career self-interest, advancing the field, and receiving credit are often more powerful motivators in publishing decisions than the technological affordances of new media.”

The increasing deluge of publications, driven by funding, tenure, and promotion pressures, has created a greater need to filter research, and the imprimatur of “good journals” often serves as just such a filter. Thus, the choice of where to publish rests on three factors: prestige, time to publication, and visibility to a target audience.

Harley goes on to discuss how the final, peer-reviewed version of a paper carries the greatest weight, compared to preprints, working papers, conference papers, etc. She also discusses the lack of traction that experiments in open peer review have gained, as well as the unfortunate decision by two journals to stop publishing supplementary data because referees could not cope with reviewing such materials. Finally, alternative bibliometrics based on social media can too easily be gamed.

I've touched on a few highlights of the paper that caught my attention; the full paper is well worth reading and pondering. It provides a good airing of the tensions regarding scientific communication that the infrastructure of our profession will need to resolve.

Next, I will mention that the December 6, 2013, issue of Science published a selection of letters and online comments reacting to the “Communication in Science” special issue. The only one I want to point to is a letter by Lopez-Cozar et al., describing an experiment in which they were able to game Google Scholar by uploading fake documents. This is an example of the vulnerabilities of alternative bibliometrics that Harley alludes to.

The Economist, in its November 9, 2013, issue, also published a selection of letters responding to the “How Science Goes Wrong” special feature. There is not much for me to comment on there either, with one exception. Professor Stuart Firestein, a Columbia University biologist, wrote a fairly critical letter. He writes, “Demanding that scientists be sophisticated statisticians is as silly as demanding that statisticians be competent molecular biologists or electrophysiologists. Both are professional abilities that are not likely to be mastered by the same people. I agree that every laboratory should have the services of a professional statistician, but that is a luxury available at best to a few wealthy labs.”

I think Firestein is missing the point, although the Economist did not do a good job of making the point I want to make. The most important contribution of statistics is not statistical methodology but critical thinking. Much of that critical thinking is non-mathematical in nature, and I believe it could be taught to lay scientists by lay scientists. Unfortunately, even in statistics courses taught by professional statisticians, the kind of critical thinking I am speaking of is often absent. I shall seek to make this point more fully in another forum.


Reference

Diane Harley, 2013: Scholarly communication: cultural contexts, evolving models. Science, 342: 80-82.


