Wednesday, September 2, 2015
Nature published a superb commentary yesterday by Glenn Begley, Alistair Buchan, and Ulrich Dirnagl, advocating institutional reforms to help reduce irreproducible research. I have little to add except unabashed praise. DTLR endorses the views expressed in the commentary.
Sunday, August 30, 2015
The AMS statement on weather analysis and forecasting
Following up on my last post on meteorology, I want to draw readers' attention to the American Meteorological Society's information statement on weather analysis and forecasting, which appeared in the Society's Bulletin recently, and can be found online here. I found it to be very informative.
Tuesday, August 18, 2015
Dr. Marshall Shepherd's weather pet peeves
Check out Dr. Marshall Shepherd's blog post, "10 Common Myths and Misconceptions about the Science of Weather" at Forbes.com. He is a distinguished professor at the University of Georgia and a former president of the American Meteorological Society.
Sunday, August 9, 2015
Self-correction and an open research culture in science
Back in the June 26 issue of Science, there was a pair of commentaries by Alberts et al. (2015) and Nosek et al. (2015) entitled, respectively, "Self-correction in science at work" and "Promoting an open research culture." Both articles are reactions to concerns about non-reproducible research. The first focuses on incentives, investigations of misconduct, and ethics education. The second deals with transparency, and introduces a set of guidelines called TOP (Transparency and Openness Promotion). It discusses a set of standards for transparency, each with multiple levels of stringency. While I do not think this pair of articles captures the whole scope of the irreproducibility crisis, they do provide much food for thought on how we should address it. Thus I recommend both articles to DTLR readers.
One passage in Nosek et al. (2015), discussing the preregistration of studies and analysis plans, caught my eye: "Preregistration of analysis plans certify the distinction between confirmatory and exploratory research, or what is also called hypothesis testing versus hypothesis generating research. Making transparent the distinction between confirmatory and exploratory methods can enhance reproducibility." I fully endorse this perspective. I have met scientists who fail to understand the point made here so succinctly, and it really deserves emphasis.
References
B. Alberts, et al., 2015: Self-correction in science at work. Science, 348: 1420-1422.
B. A. Nosek, et al., 2015: Promoting an open research culture. Science, 348: 1422-1425.
Saturday, June 13, 2015
Some nice physics in the popular press
I was delighted to see a couple of physics items in the popular press. Last week the New York Times had this opinion piece by physicists Adam Frank and Marcelo Gleiser, "A crisis at the edge of physics." There isn't much new here for DTLR readers, as we covered similar ground last year in our discussion of Jim Baggott's book, Farewell to Reality. Helen Quinn's term "scientific metaphysics" (see my previous post) might be a good one to describe what Baggott less charitably calls "fairy tale physics". Nonetheless I take a dim view of efforts to de-emphasize the importance of empirical testing. Such efforts smack of wishful thinking, in my view.
Then Forbes had a piece by physicist Chad Orzel defending his choice to work in atomic and molecular physics. Titled "Particle and astro aren't the only kinds of physics", it is absolutely delightful, except for the swipe at biomedical researchers near the end! Of course, condensed matter physics is the largest sub-field of physics, but it does seem true that particle physics, astrophysics, and cosmology are the branches of physics that get the most traction in the popular media. Physics needs more evangelists like Orzel to call attention to the many and varied other subfields of physics.
Science is done in an artificial environment
I would like to recall a Physics Today "Reference Frame" column from a number of years ago by the distinguished particle physicist Helen Quinn (2009), "What is Science?" It is a particularly philosophical piece, which among other things introduces the term "scientific metaphysics" for extrapolations of scientific theory and speculation into regimes that are not, in principle, empirically testable, such as the "many worlds" interpretation of quantum theory.
However, another part of the article caught my attention today. I would like to quote an entire paragraph:
Science is done in an artificial environment, where its logic can develop without a need for immediate action. That unnatural environment allows science to yield powerful and unexpected new options for eventual action. It is important to note, however, that some applications of science, such as medicine, cannot wait until all questions are resolved. Medical practice can be based on the best available scientific knowledge and theory, but it must often apply them in untested regimes. Much of the public's feeling that science is always changing its conclusions comes from changes in medical advice that occur when new scientific knowledge overrides the previously best guesses of medical practice.
I've long agreed that "Science is done in an artificial environment," and I have gravitated towards applied science, partly as a result of discomfort with the ivory tower nature of basic research. The aspect of science that Quinn describes above is one that has not been emphasized much by others, in my experience. However, I think it is important for both scientists and non-scientists to understand this point, and Quinn puts it more eloquently than I could have.
I would only disagree with the gist of the final sentence. In medicine, much of the flip-flopping conclusions that Quinn speaks of are due to the prevalence of studies based on observational, not experimental, data. Lacking any attribution to causality, such studies really do give the impression of changing conclusions, which is detrimental to both scientists and non-scientists alike.
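To make this concrete, here is a minimal simulation (my own illustration, not from Quinn's article; the variable names and effect sizes are invented) of how a lurking variable in observational data can manufacture an association that a naive analysis would read as causal:

```python
# Illustrative sketch (hypothetical example): a lurking variable makes an
# observational association look causal. Here "age" drives both coffee
# consumption and blood pressure; coffee itself has no effect at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

age = rng.uniform(20, 80, n)                  # lurking variable
coffee = 0.05 * age + rng.normal(0, 1, n)     # exposure, driven only by age
bp = age + rng.normal(0, 5, n)                # outcome, driven only by age

# Naive observational analysis: a strong coffee/blood-pressure correlation.
print("naive correlation:", round(np.corrcoef(coffee, bp)[0, 1], 2))  # ~0.6

# Conditioning on the confounder (crudely, via narrow age bands) removes it.
for lo in (20, 40, 60):
    band = (age >= lo) & (age < lo + 5)
    r = np.corrcoef(coffee[band], bp[band])[0, 1]
    print(f"ages {lo}-{lo + 5}: correlation {r:+.2f}")  # each near zero
```

A randomized experiment breaks the link between the lurking variable and the exposure, which is why experimental data can support causal claims that observational data cannot.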
Reference
Helen Quinn, 2009: What is Science? Physics Today, 62 (7): 8-9.
Thursday, June 4, 2015
The grubby details of reproducing research
This week in Nature, Richard van Noorden writes of some preliminary findings of the Reproducibility Project: Cancer Biology, presented at a conference in Brazil. It is an interesting report, though I feel somewhat uncomfortable about the meta-analysis type statistical significance measure that they plan to calculate. Nonetheless, the effort seems a worthy one, and DTLR awaits the final results.
Tuesday, May 19, 2015
More pieces of the nonreproducibility puzzle
A couple of recent news features have shed light on pieces of the reproducible research puzzle. Back in February, Jill Neimark in Science wrote about contaminated cell lines. And just this week, Monya Baker in Nature wrote about batch-to-batch variation and non-specificity of antibodies. Cells and antibodies are workhorses of modern biological research, and growing attention is needed to these potential sources of error. I commend both of these articles to DTLR readers.
References
M. Baker, 2015: Blame it on the antibodies. Nature, 521: 274-276.
J. Neimark, 2015: Line of attack. Science, 347: 938-940.
Sunday, April 19, 2015
Science funding in the American political framework
A social contract model
The 16-day government shutdown in the
United States, in October 2013, had a pronounced effect on federal
funding for scientific and medical research. Writing in Nature,
Daniel Sarewitz found the event a good opportunity to reflect on the
role of taxpayer-funded scientific research in this country. I will
similarly indulge in such reflection here. Sarewitz emphasizes the
need for a tangible payoff from such research. I agree with that
point, since in my view a social contract exists between the people
who fund research, particularly the taxpayers, and those who carry it
out.
In this month's issue of Eos, William Hooke discusses this social contract more eloquently than I could. Hooke realizes that much of the taxpayers' financial support “comes from people far more strained financially than we are,” as most mid-career and senior scientists belong comfortably in the middle class. Hooke wants us to think about the following questions: “Why should they pay us? Isn't it because they hope that our labors will improve their lot in life? Don't we owe them something? What would a fair return on society's investment look like?”
In the U.S., the social contract originated in World War II, and the terms of the contract were tantamount to “give us lots of money and don't ask too many questions, and one day you'll be glad you did.” As Hooke points out, for most of the post-war period, this setup was wildly successful. However, Hooke points to numerous stresses on the contract that have emerged in the last decade, and he is not impressed with the scientific community's response: a turn to political advocacy. “Worse, we've too often dumbed down our lobbying until it's little more than simplistic, orchestrated, self-serving pleas for increased research funding, accompanied at times by the merest smidgen of supporting argument.” In the geosciences, Hooke claims that “we've allowed ourselves to turn into scolds. Worse, we've chosen sides politically, largely abandoning any pretense at nonpartisanship.” Hooke says that the outcome is “alienating at least half the country's political leadership—and half the country's population.” He counsels as follows: “As individuals and as a community, let's listen more to the people and the political leaders who support us and spend less time up front telling them what we know. Relaying our knowledge can come later; we first need to build a bridge of trust that can carry the weight of truth.”
I am largely in agreement with the comments by Sarewitz and Hooke cited above; I found Hooke's piece particularly compelling. Let me take the opportunity here then to outline my own views on the social contract between publicly funded scientists, taxpayers and voters, and the political representatives in between. My thoughts are motivated by two seemingly contradictory premises.
Premise 1: Basic research deserves a mostly hands-off approach
Basic research is driven by curiosity and serendipity, and therefore cannot be managed in the same way as goal-driven, applied research. The payoff for society is almost never obvious, immediate, or predictable, yet basic research has a strong track record of producing such payoffs in the long run, albeit with a high error rate. (Not every piece of research will result in a payoff.) Hooke's account is right: “give us lots of money and don't ask too many questions, and one day you'll be glad you did.”
However, I think that outside pressure is indeed needed on the scientific community on at least one issue: non-reproducible research. Such “research” represents a waste of society's resources as well as an insult to the goals of scientific endeavor. The scientific community has not been quick to face up to the problem, which is intimately bound up with flaws inherent in the community's infrastructure: its reward system for funding, tenure, and promotion. This is one arena where outside pressure from taxpayers and political leaders would be welcomed by me, despite the predictable pushback from within the scientific community.
Premise 2: Scientists are never “entitled” to funding
Taxpayer funding for research makes use of the coercive power of government to seize a share of individuals' income for redistribution. The implicit premise is that the government knows how to spend that share of money better than you would, and that it doesn't trust you to make the right decision. Taxpayers do have limited control over the government's wisdom when they function as voters, but this control is very weak.
I am influenced here by the work of the public choice school of economics. Scholars of that stripe point out that in democracies, voters choose candidates or parties that represent a portfolio of policy stances across the entire spectrum of political decisions that a nation needs to make. A candidate or party's stance on the deployment of taxpayer-funded scientific research is usually a very small piece in that portfolio. That piece is often drowned out in political campaigns that focus on issues more visible to the voting public. This severely limits citizens' direct voice to government on the level and distribution of their money to scientific research. Secondly, on any particular issue (like science funding), citizens' interests are usually more diffuse than those of special interest groups, who have a large and specific vested interest in the government's decisions, and thus are able to mobilize large amounts of money to influence both policy makers (elected or otherwise) and the voting public. The amount of such money is still dwarfed by the money the government has to spend, as well as by the economic consequences of that spending for the special interests; this makes such lobbying a worthwhile investment on their part. Thus, at least in the U.S., our democratic form of government severely reduces the role of the citizen/voter/taxpayer in influencing how much public money goes to scientific research, and how that money is allocated.
Scientists should not maintain an attitude of entitlement. Their funding is not a direct outcome of voter support, and the survival of that funding is subject to competing pressures from other special interests. Scientists should reflect on whether they (as members of the middle class) deserve other people's money to carry out research of their own choosing, while others in society, such as children in families below or straddling the poverty level, are deprived of even the opportunity to advance out of the conditions in which they were born. Gratitude for the persistence of Premise 1, the hands-off approach to funding basic research, should prevail instead of an elitist sense of entitlement.
Private funding for science?
I'd now like to discuss a blog post by accomplished physicist Sabine Hossenfelder, from January 2013. In this post, she provides a very thoughtful and considered discussion of the role of private funding for science, either from wealthy philanthropists or through crowdfunding, in contrast with public funding. I will have to disagree with much of what she has written. However, note that some of the disagreement may be due to larger differences in culture and government between Europe (where she resides) and the U.S. (where I do). The discussion below will be from an exclusively American perspective, and I will not claim that my conclusions would apply to countries with different cultural norms and political systems.
First, let me list the points where I agree with Hossenfelder, though I will express them differently and perhaps with my own spin on them. (Thus don't take the following as a literal reporting of her views, but rather my own perspective on certain of her views with which I find favor.) Quotation marks are used to indicate words or phrases from her post.
Much of scientific research is in “unglamorous” areas that lack public visibility, and the nature of such research (let alone any potential tangible benefit for society) may be very difficult to communicate. Moreover, the “dramatically high failure rate” may make the cost-benefit trade-off seem difficult to justify. For this reason, much scientific research will fail to attract private funding. (Private donors like the Gates Foundation are often interested in measuring the impact of their giving; this will be hard to do for blue-sky research.)
I believe that even the patron saint of capitalism, Adam Smith, would have recognized that funding basic research is a legitimate function of government, because of an inherent market failure. Basic research benefits everyone equally, not any particular investor who wants a competitive advantage, and the payoff is usually too long-term and too unpredictable for a private investor's taste.
“Wealthy donors often drive their own agenda.” They can have a disproportionate influence on the kinds of research that get done, with respect to the priorities of others. “We have a lot to lose in this game if we allow the vanity of wealthy individual[s] to influence what research is conducted tomorrow.”
Now, to the points of disagreement. My biggest impression is that Hossenfelder's piece conveys a sense of entitlement. She works in an area of physics (quantum gravity) that she says is “constantly underfunded,” and she finds essentially no sources of private funding in her country that target her specific field. However, she strongly believes in “the relevance of my own research” and expects the same of any scientist. This is only natural. The only problem I have is that nobody is entitled to spend other people's money. Scientists certainly are not entitled to use the middleman of government to coerce non-scientists, who may have no interest in their research, to fund a research program defined simply because they are interested in it. To support Hossenfelder's position, I would have to assume that I am smarter than the taxpayers, and that the government should seize a portion of their income to support my research, which is more worthy than what the citizens would have spent that money on otherwise. We need to make the case that financing basic physics research is good for all of society, in some abstract sense, not just good for physicists.
Secondly, Hossenfelder values the stability of government funding as opposed to the potentially transient nature of private funding. “One of the main functions of governmental funding of basic research is its sustained, continuous availability and reliability.” In the United States, this just isn't true. First we saw the sequestration, which created a massive strain on the research enterprise across the country. Then there was the government shutdown itself. The adverse consequences of both the sequestration and the shutdown have been discussed at length by others. The bottom line is that in the United States, public funding of science has become highly unstable and unpredictable, no less so than funding from private sources.
Third, Hossenfelder states: “Interests of wealthy individuals can affect research directions leading to an inefficient use of resources, leaving essential areas out of consideration. Keep in mind that the relevant question is not whose money it is, but how it is best used to direct investment of resources into an endeavor, science, with the aim of serving our societies.” This implies that some kind of optimal distribution of research funds exists or can be found. I do not agree with that implication. Funding for science is based on choices made by individuals. Individuals may choose how to spend their own money (e.g., wealthy donors) or they can choose how to spend the taxpayers' money (politicians and funding agency bureaucrats). There is no objective “right answer” to how such choices should be made; each chooser makes his or her own cost-benefit trade-off calculation, ostensibly on behalf of society.
The funding agency bureaucrats may have better qualifications than the wealthy donors and politicians to disburse research money, since the bureaucrats usually have science backgrounds themselves, and rely on peer review by other scientists. However, the bureaucrats usually do not have the power to determine the overall budget for a given funding agency – this is the domain of politicians, and in the U.S., the politicians have chosen to shrink the available funds for science (along with all other areas of discretionary funding) in order to keep taxes low, maintain the government's ability to borrow money at reasonable interest rates, and protect entitlement programs, such as Social Security and Medicare. In other words, the democratic process in the U.S. has resulted in a reduction in the overall level of public funding for science. For all Hossenfelder's praise of putting science funding under the control of a democratic process, could she really find fault with the resulting de-funding of science?
Fourth, Hossenfelder does not think scientists should “waste time on marketing” their projects to donors. This line of argument crops up in her discussion of crowdfunding. However, under the public model, U.S. scientists have simply outsourced such marketing to their professional societies, who use volunteers and professional lobbyists to make the case for public science funding to Congress. Make no mistake – this is still marketing, just at a macro level instead of a micro level. Yes, individual scientists are spared the indignity of begging for money, but in my view this may be a bad thing, because it perpetuates the sense of entitlement. When you have to beg for money, you have no illusions about who is putting the bread on your table.
Funding for science will always be limited because there are other demands on society's resources. Without constraint, funding for science could become a bottomless pit, because there is no end to the number of questions and research programs that could be formulated, nor to the number of people recruited to pursue them. Finite resources force us to be selective and set priorities, and the resulting fiscal and scientific discipline can often be healthy.
I believe there is plenty of room in science for both kinds of funding, private and public. Wealthy donors should have the opportunity and the discretion to contribute to the funding of scientific research, as long as the products of that research (journal papers) are subjected to the same peer review process expected of publicly funded science, and funding sources and other potential conflicts of interest are disclosed. Meanwhile, public funding of science must also continue, in order to alleviate the market failure discussed above. However, its level and distribution will continue to be unstable and subject to contraction at the whim of Congress and the voters who elect its members. For science to continue as a viable enterprise, its funding must rest on a balanced ecosystem with both public and private sources.
One solution to Hossenfelder's concerns is to encourage wealthy donors to make use of peer review within their philanthropic foundations. For instance, the donors can play a macro role in determining how much to give for a certain broad sector of research (which after all is what Congress does when it hands the National Science Foundation a budget) and let individual scientists submit proposals to the foundation's expert panel for peer review. This alleviates some of the issues she has with crowdfunding. I also agree with Hossenfelder that the door should be opened for private citizens (regardless of wealth or income) to donate directly to government funding agencies, with a moderate level of direction. For instance, the donor might direct a gift to a division of the National Cancer Institute, but the funds would be disbursed via the usual peer review process by that division.
An alternate model is for scientists to fund their own research, thus guaranteeing their intellectual independence. Stephen Wolfram famously did so by financing his unconventional computational research with earnings from a successful software company. Julian Barbour also does unconventional research, funded by his “actual” career as a scientific translator, plus a lucky break in obtaining some farmland that he rented to his brother. (See the preface to Barbour, 2001.) I am also inspired by the story of American composer Charles Ives, who wrote highly unconventional music. He didn't want his family to “starve on his dissonances,” so he pursued a highly accomplished career in the insurance industry, making music his hobby. During his lifetime he was known as an actuary and influential insurance executive, but history primarily remembers him for his music. Only true independence gives you the time and space to think for yourself.
However, a caveat should be made. Hamming (1997, p. 353) was critical of the independence given to members of the Institute for Advanced Study (IAS, Princeton, NJ) because most of them “continued to work on the same problems which got them there but which were generally no longer of great importance to society.” Hamming notes that only a “few, like von Neumann, escaped the closed atmosphere of the place with all its physical comforts and prestige, and continued to contribute to the advancement of Science.” One major difference is that Wolfram and Barbour had to work hard at their day jobs, while the IAS scholars were paid to be pure theorists.
A personal note
Incidentally, I myself departed from basic research partly because I felt uncomfortable with the social contract between academic research and the public that funds it. It seemed to me that the social contract was non-existent. The public has to take a leap of faith that in the long run, funding science would result in societal benefits. There is evidence that this is true in the aggregate, but I could never justify why any particular line of basic research that I might pursue would result in a tangible benefit to society, one that justifies the coercive power of government to secure my funding. On the other hand, I am comfortable as a taxpayer and voter with the fact that I help fund blue-sky research by others.
References
Julian Barbour, 2001: The End of Time: The Next Revolution in Physics. (Corrected edition.) Oxford University Press.
Richard W. Hamming, 1997: The Art of Doing Science and Engineering: Learning to Learn. Gordon and Breach.
William Hooke, 2015: Reaffirming the social contract between science and society. Eos, 96 (6): 12-13.
Donald Sarewitz, 2013: Science's rightful place is in service of society. Nature, 502: 595.
Wednesday, April 15, 2015
Scientific software reproducibility
Writing in Nature, Erika Check Hayden describes steps being taken by the journal Nature Biotechnology to improve the reproducibility of computational results. She cites cases where error-prone software led to the publication of incorrect results in good journals. Lack of software documentation seems to be one of the major issues; software testing and reproducibility were also mentioned. The article also delves into the challenges facing such a policy, such as finding qualified reviewers and the danger of public shaming.
DTLR feels that such challenges should not be allowed to block efforts to reform publication standards. I endorse the stance taken by Nature Biotechnology, which can be found here.
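For readers who write research code, here is a minimal sketch of the kind of automated regression test that such a policy encourages. This is my own illustration, not drawn from the Nature Biotechnology guidelines; `normalize` is a hypothetical stand-in for a real analysis function:

```python
# A minimal sketch (hypothetical example) of a regression test that guards
# a published computational result against silent breakage.
import numpy as np

def normalize(counts):
    """Scale each column (sample) of a count matrix so it sums to one."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=0)

def test_normalize():
    data = np.array([[10.0, 2.0],
                     [30.0, 6.0]])
    result = normalize(data)
    assert np.allclose(result.sum(axis=0), 1.0)      # columns sum to one
    assert np.allclose(result[:, 0], [0.25, 0.75])   # exact proportions kept

if __name__ == "__main__":
    test_normalize()
    print("all checks passed")
```

Even a handful of such checks, run on every change, documents what the code is supposed to do and makes it far easier for a reviewer to verify a computational claim.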
Sunday, April 12, 2015
Reproducible research in the Chronicle of Higher Education
Writing for the Chronicle of Higher Education last month, Paul Voosen covers the National Institutes of Health's work in encouraging reproducible research. (The article is behind a paywall, so I have not linked it here.) The NIH seems to be getting its act together. The article points to universities, however, as the weak link. "Indeed, more than any part of the scientific system, the universities have been ignoring the replication crisis," the article states, attributing the thought to Glenn Begley, author of the well-known Amgen study of non-reproducible research (Begley & Ellis, 2012). This has the ring of truth to me. As I've stated before, I commend the NIH for finally acknowledging the issue and taking steps to remedy it, some of which are detailed in the brief article.
The NIH and other funding agencies, as well as prominent journals, must necessarily take a top-down approach. However, this needs to be complemented by a bottom-up approach: the embracing of cultural change by scientists in the trenches. Such a change will be opposed by those who benefit under the status quo, as Arturo Casadevall suggests in the article. The tight and thus highly competitive funding environment exacerbates the problem, as his colleague Ferric Fang notes in the article. I would argue that scientists need to remember why they became scientists, and they need to be angry about the non-reproducible research that pollutes the academic literature. However, this is a tough thing to say when labs have to fight for funding survival. Anger and idealism don't do much good if your lab has gone out of business and you failed to achieve tenure. It's a conundrum.
I highly recommend Voosen's article to DTLR readers.
References
C. G. Begley and L. M. Ellis, 2012: Raise standards for preclinical cancer research. Nature, 483: 531-533.
Paul Voosen, 2015: Amid a sea of false findings, the NIH tries reform. Chronicle of Higher Education, March 20, 2015, page A12.
Saturday, March 28, 2015
Limiting the number of mistakes when using big data
A few weeks ago, the Economist's Technology Quarterly had an excellent profile of biostatistician Dr. Susan Ellenberg in its 'Brain Scan' column. The article describes her long and influential career in clinical trials, spanning the NIH, the FDA, and academia. Examples include her insistence that patients who did not follow the trial protocol be tracked, her championing of surrogate endpoints in cancer and HIV trials, her involvement of patient groups in planning clinical trials, and her work on interim analysis and vaccine safety.
The article brings us to the present, and the benefits and hazards of using big data, which is typically observational data found in massive health care records databases, such as those owned by healthcare and insurance organizations. Ellenberg summarizes the issues as follows: "The more people you have the richer your database will be but also the more ways there are to be misled by the data." The article concludes, "We've got all this data...The answer isn't to ignore it. The answer is to figure out how to limit the number of mistakes we make."
The article does not give examples of such mistakes, but readers steeped in statistical thinking can come up with examples of their own. Many such mistakes involve multiplicity, a phenomenon that can lead to identifying spurious correlations. For instance, a blind search for correlated variables in such databases, perhaps assisted by subsetting and subgrouping, is bound to find many spurious correlations by chance alone. Few of these would be reproduced in other data sets or in future data. For those that are reproduced, the direction of causality, if there is one, may be unclear; alternately, a lurking variable (one not measured or captured in the database) may hold the causal insights. One way to protect ourselves from such mistakes is to use findings from big data only as hypotheses, to be confirmed by prospective, randomized, and blinded trials. The short article goes into neither the nature of the mistakes nor possible remedies. Nonetheless, it plays a valuable role in tamping down expectations of big data, a term that has received a great deal of hype in recent years.
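A short simulation makes the multiplicity problem tangible. This sketch is my own illustration (the sample sizes are arbitrary, not from the article): screening 1,000 pure-noise variables against an outcome at the conventional p < 0.05 threshold reliably manufactures about 50 "discoveries".

```python
# Simulation of the multiplicity problem: screening many unrelated variables
# against an outcome yields "significant" hits by chance alone.
# (Illustrative sketch; all data here are pure noise.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_patients, n_variables = 500, 1000

outcome = rng.normal(size=n_patients)               # e.g., a health measure
noise = rng.normal(size=(n_patients, n_variables))  # candidate predictors

# Test every candidate against the outcome, as a blind database search would.
p_values = np.array([stats.pearsonr(noise[:, j], outcome)[1]
                     for j in range(n_variables)])

hits = np.flatnonzero(p_values < 0.05)
print(f"spurious 'discoveries' at p < 0.05: {hits.size}")  # expect about 50

# A confirmatory check on fresh data exposes them: re-testing only the hits
# on an independent sample again leaves roughly 5% "significant" by chance.
outcome2 = rng.normal(size=n_patients)
noise2 = rng.normal(size=(n_patients, n_variables))
replicated = sum(stats.pearsonr(noise2[:, j], outcome2)[1] < 0.05
                 for j in hits)
print(f"hits that replicate: {replicated} of {hits.size}")
```

The second stage is exactly the "treat big-data findings as hypotheses" discipline described above: the confirmation step, run on data not used for the search, strips away most of the chance findings.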
Thursday, February 12, 2015
Physicists as Secretaries of Defense and Energy?
This post is the result of some superficial searches on Wikipedia, prompted by the Senate confirmation of President Obama's nominee for Secretary of Defense. These musings are perhaps at the periphery of the blog's range of topics, and I will avoid commenting on the nomination's political or national security implications. Instead I will broaden the discussion to include past national security appointments as well as the Secretary of Energy post.
National security
In December, the President nominated Dr. Ashton Carter as his fourth Secretary of Defense. The Senate voted to confirm him earlier today. Carter has a Ph.D. in physics from Oxford University, and will be the second Ph.D.-level physicist to hold that office. Dr. Harold Brown, President Jimmy Carter's Secretary of Defense, was the first. In considering both physicists, one might also add Dr. William Perry, President Clinton's second Secretary of Defense, who has a Ph.D. in mathematics. All three were nominated by Democratic Presidents, and could be thought of as technocrats with extensive backgrounds in national security (including earlier stints in the DOD administration) in addition to their scientific credentials. Brown had previously served as Secretary of the Air Force, and was president of Caltech at the time of his nomination; Perry and Carter had earlier been deputy secretaries of defense. They contrast with most other recent SecDefs, who came from the political or business realms (with the notable exception of Dr. Robert M. Gates, about whom more later).
Both Perry and Carter were nominated after other prominent candidates removed themselves from consideration. Perry was nominated after Vice Admiral Bobby Ray Inman famously withdrew his nomination, after initially accepting. More recently, Carter was nominated after others (Senator Jack Reed and former Undersecretary of Defense for Policy Michele Flournoy) reportedly declined to be considered. Of the three (Brown, Perry, and Carter), only Perry seems to have had actual military service. Brown has recently published his memoirs, Star Spangled Security. An account of Brown's DOD can also be found in chapter 10 of General Colin Powell's memoirs, My American Journey.
Some other notable appointments in recent history could be mentioned. President Jimmy Carter's Secretary of the Air Force was a Ph.D. physicist, Dr. Hans Mark. President Bill Clinton's Secretary of the Air Force was the prominent MIT aerodynamicist Dr. Sheila Widnall. Both Mark and Widnall eventually returned to academia. President George W. Bush's Secretaries of the Navy were Gordon England, who had majored in electrical engineering in college, and Dr. Donald C. Winter, a physicist. Both have had careers in industry and government. The same president's second Secretary of the Army was Dr. Francis J. Harvey, a metallurgist, whose previous career had been in industry. Harvey was fired by Secretary Gates in the wake of the Walter Reed Army Medical Center scandal.
Among the presidential national security advisers, Admiral John Poindexter from the Reagan administration (and the Iran-Contra scandal) comes to mind. He has a Ph.D. in physics, having studied under Nobel laureate Rudolf Mössbauer. Among the CIA directors, one thinks of Dr. John M. Deutch, an MIT physical chemist, former Deputy Secretary of Defense, and former Undersecretary of Energy. His career was tainted by an investigation into mishandling classified information, for which he was ultimately pardoned by President Clinton.
Energy
Both of President Obama's Secretaries of Energy are physicists. His first, Dr. Steven Chu, is a Nobel Laureate, while his second, MIT's Dr. Ernest Moniz, had previously served as an Undersecretary of Energy during President Clinton's second term. Despite DOE's obvious connection with physics, they seem to be the first two Ph.D.-level physicists to hold that office. President George W. Bush's second Secretary of Energy, Dr. Samuel W. Bodman, has a Ph.D. in chemical engineering. His career started in academia, moved into finance, and ended in government service; he served as Deputy Secretary at both Treasury and Commerce before taking the helm at DOE. Otherwise, as at Defense, most Energy Secretaries have had backgrounds in politics or business; a number have had extensive experience at DOD as well. From this perspective, Chu and Moniz again seem to be technocrats like Brown, Perry, and Carter at Defense.
Other physicists
Like the vast majority of presidential science advisers, Obama's is a physicist, Dr. John Holdren. For a while, a Nobel laureate physicist, Dr. Carl Wieman, served in a senior position in Holdren's office. The current director of the National Science Foundation (NSF), Dr. France Córdova, is an astrophysicist; she is the former president of Purdue University. The lead positions of agencies such as the Office of Science and Technology Policy, the NSF, the NIH, and the CDC are almost always held by scientists.
What to make of all this?
President Obama's cabinet will be unique in having two Ph.D.-level physicists serving simultaneously on it. (Both Carter and Moniz are also fellows of the American Physical Society.) I am not aware of physicists serving in any other cabinet-level office in recent history besides DOD and DOE. (President Clinton's attorney general, Janet Reno, was a chemistry major in college.) The Obama cabinet also has, as its second Interior Secretary, Sally Jewell, who has a bachelor's degree in mechanical engineering. She started her career in the oil industry, and later moved into banking.
Cabinet appointments are usually political, and technocrats (like the Ph.D. physicists under discussion) seem to be in the minority. Looking at other cabinet offices within the purview of this blog, we see that at Health and Human Services, evidently only one medical professional has ever served as Secretary since the department was separated from Education: Dr. Louis Sullivan, under the first President Bush.
The preceding comments have been largely factual; what follows is opinion and speculation. It does not seem to me that a background in science or technology would necessarily add to or subtract from the qualifications of a cabinet-level official. Given the role of technology in the armed forces and in energy, it does not surprise me that, among the many career paths that lead to a cabinet-level position in defense or energy, science or engineering might be included. However, such a background is neither necessary nor sufficient. Arguably the finest SecDef in recent memory, Dr. Robert M. Gates, did not have a background in science or engineering. His doctorate was in Russian and Soviet history.
The appointment of Gates is nearly as exceptional as the appointment of Ph.D. technocrats, for like them Gates belongs to the small number of SecDefs who have had primary careers in national security, as opposed to political and business leaders (with perhaps particular expertise in national security).
Disclaimer: I have not considered Acting Secretaries in the above account. Moreover, the use of Wikipedia has naturally limited the accuracy of the information I report above. Readers are invited to submit corrections or other perspectives in the Comments.
References
Harold Brown with Joyce Winslow, 2012: Star Spangled Security. Brookings Institution Press.
Colin Powell with Joseph E. Persico, 1995: My American Journey. Random House (New York).
Thursday, January 29, 2015
More on reproducible research
Earlier this week, Joel Achenbach covered reproducible research in a Washington Post article. This follows hot on the heels of the Science News series mentioned in my last post. This is a topic that has received much discussion within the scientific literature, such as in Nature and Science, and bled into the popular press, for instance, with an Economist cover story in the fall of 2013, and a National Public Radio piece last fall by Richard Harris. Achenbach's article doesn't have anything particularly new for those who have been following this thread over the last few years (including readers of this blog). However, this topic deserves attention from major news organizations such as the Post and the Economist. The taxpayers, after all, are bankrolling much of scientific research, and deserve to be kept in the loop on how their money is spent, or mis-spent as the case may be.
I also want to call readers' attention to an opinion piece last fall by John Ioannidis (2014). It is a forward-looking piece on how to make research more reproducible. Much of the paper focuses on the infrastructure of the scientific community, including its incentive systems. Ultimately this is indeed where change must occur. He also has a list of "Some research practices that may help increase the proportion of true research findings." Some of these are not explained in detail in the paper. Third to last on his list is "Improvement of study design standards," an issue I feel is paramount. Unfortunately Ioannidis does not go into great detail on this particular point, though it could deserve a paper of its own.
My feeling is that, despite all the attention, reproducible research is not yet a big deal in the scientific community. Scientists, and those who fund them, aren't angry enough yet to push for serious changes. Until that happens, DTLR will not rest in promoting reproducible research practices.
Reference
John P. A. Ioannidis, 2014: How to make more published research true. PLoS Medicine, 11 (10): e1001747.
Tuesday, January 27, 2015
Science news on reproducible research
Thursday, January 1, 2015
Let there be a year of light
Happy New Year, readers! DTLR joins in celebrating 2015 as UNESCO's "International Year of Light and Light-based Technologies" (IYL 2015). The year 2015 also marks the anniversaries of several optics milestones, such as the publication of Maxwell's equations in 1865 (reproduced in modern form after the list below), celebrated in January's issue of Nature Photonics. The IYL 2015 organizers also point to the following anniversaries:
- 1015: Ibn al-Haytham's Book of Optics.
- 1815: Fresnel and the wave nature of light.
- 1915: Einstein's general relativity - light in space and time.
- 1965: Cosmic microwave background; Charles Kao and optical fiber technology.
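As a reference point, here are Maxwell's equations as usually written today, in the compact vector notation later distilled by Heaviside from Maxwell's original 1865 system of twenty equations (SI units):

```latex
% Maxwell's equations in modern differential form (SI units)
\begin{align}
  \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
  \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t},\\
  \nabla \cdot \mathbf{B} &= 0, &
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
    + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{align}
```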