Wednesday, December 31, 2014

Calculating logarithms?!

As I was culling my book collection, I came across three delightful books by Bob Miller, a CCNY math professor:  his Precalc Helper, Calc I Helper, and Calc II Helper, all published in 1991 by McGraw-Hill's Schaum division.  That was around the time I started taking calculus courses, though I do not recall using those books.  I may have acquired them shortly after I completed my year of calculus.

Regardless, as I was leafing through these books on this last day of 2014, nearly a quarter century after they were published, I was particularly struck by Miller's treatment of logarithms.  In the Precalc Helper, Chapter 12, "Modern Logarithms", begins with this paragraph (p. 75):
We will do a modern approach to logs.  Modern to a mathematician means not more than 50 years behind the times.  We will not do calculations with logs (calculations involving characteristics and mantissas).  This is no longer needed because we have calculators.  What is needed is a thorough understanding of the laws of logarithms and certain problems that can only be solved with logs.
If I ever did calculations with characteristics and mantissas, I certainly don't remember them now. It is likely that my high school had already abandoned coverage of that topic by the time I was a student.

Then on page 1 of the Calc II Helper, opening the first chapter, titled "Logarithms", we find the following passage.
Most of you, at this point in your mathematics, have not seen logs for at least a year, many a lot more.  The normal high school course emphasizes the wrong areas.  You spend most of the time doing endless calculations, none of which you need here.  By the year 2000, students will do almost no log calculations due to calculators.  In case you feel tortured, just remember that you only spent weeks on log calculations.  I spent months!!!
I suspect the future arrived a lot sooner than Miller thought it would.  When I was in high school in the late 1980s, we were using a then-new software program, Derive, to graph mathematical functions.  In college, we were using Mathematica.  When I was a teaching assistant in graduate school, graphing calculators were already pervasive, and my students were allowed to use them on exams.  (I have never owned one myself.)  Evidently graphing calculators are still in use, though equivalent apps have been available for smart phones for a few years now.  (I have never owned a smart phone either, but I suspect this will have to change one day.)

At this point I am too far out of touch with mathematics teaching and technology to know what is considered standard practice.  Nonetheless I am grateful I never had to do endless logarithm calculations by hand.  Make no mistake, logarithms are essential for science, engineering, and medicine.  In fact, I worked with logarithms at work earlier today.  But I let the computer do the calculating.
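For readers who, like me, never learned the table-based methods Miller mentions: the "characteristic" is the integer part of a common (base-10) logarithm and the "mantissa" is its fractional part; printed tables listed only mantissas.  Here is a minimal Python sketch, purely illustrative and not taken from Miller's books, of the multiply-by-adding-logs routine that calculators and computers have made unnecessary:

```python
import math

def characteristic_and_mantissa(x):
    """Split log10(x) into its integer part (characteristic) and
    fractional part (mantissa), as old logarithm tables did."""
    log_value = math.log10(x)
    characteristic = math.floor(log_value)
    return characteristic, log_value - characteristic

# Multiply 273.5 * 48.2 by adding logarithms, the kind of
# hand calculation students once spent weeks (or months) on.
a, b = 273.5, 48.2
log_product = math.log10(a) + math.log10(b)
print(characteristic_and_mantissa(a))    # (2, 0.4369...)
print(10 ** log_product, a * b)          # both approximately 13182.7
```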

Tuesday, December 30, 2014

Congratulations to arXiv

According to Nature, the arXiv preprint server has reached one million articles in its holdings.  DTLR congratulates arXiv.org, and its founder Paul Ginsparg, on this achievement.

Sunday, December 7, 2014

A Review of Abrahm Lustgarten's "Run to Failure"

On April 20, 2010, the Deepwater Horizon oil drilling platform was completing work on BP's Macondo well in the Gulf of Mexico.  The well suffered a blowout, and the blowout preventer failed, resulting in an explosion and the eventual sinking of the platform.  Eleven workers were killed, and seventeen were seriously injured.  The resulting rupture produced a massive oil spill that lasted 86 days.

The disaster was eminently preventable.  Investigation of its causes has focused on a number of technical and engineering issues; however the larger context was BP's corporate culture.  Understanding that culture requires a deeper study of BP's checkered history of operations management and industrial safety.  The book Run to Failure, by Abrahm Lustgarten (2012), provides just that.  Written in conjunction with the Frontline documentary, The Spill, it provides an in-depth examination of BP's history in North America, beginning in 1989 when John Browne was named head of worldwide exploration and production.  Browne would later become BP's chief executive, and on his watch there were major disasters at two of BP's legacy assets:  its Texas City refinery and its operations on Alaska's north slope, site of its Prudhoe Bay oil fields, as well as an extensive pipeline network.  These legacy assets were considered sources of revenue to be milked as much as possible, but they were not opportunities for growth, and thus infrastructure investments were minimized.

After the prologue, which describes the Deepwater Horizon accident and introduces the book, the next fourteen chapters are dedicated to events prior to that accident.  We observe a corporate culture in which site managers were frequently rotated while being pressured to produce financial results.  This produced a short-term mentality, perpetual cost cutting, and an avoidance of investment in infrastructure maintenance, even where safety and the environment were at risk.  Safety management focused on inexpensive "slips and trips" rather than on the vastly more expensive process safety.  Workers who raised concerns were ignored, and whistleblowers were blacklisted.  An attitude of "run to failure" prevailed at BP's legacy assets.  However, even BP's preferred areas for investment, such as the Gulf of Mexico, provided examples of corner-cutting in the rush to start making money.  The near sinking of BP's Thunder Horse platform during Hurricane Dennis in 2005 was caused by several check valves having been installed backward in the platform's pontoons.

BP's poor safety record is compared unfavorably with those of other major oil companies, particularly Exxon, which seems to have taken to heart the lessons of the notorious Exxon Valdez oil spill.  The rate of spills and other process accidents for BP was usually several times higher than that of its competitors.

The last two chapters, and the epilogue, return to the Macondo well and the Deepwater Horizon accident.  The exposition of events reveals a series of poor decisions as well as equipment failures that all point to a culture of corner cutting in the rush to get results.  It provides a case study of engineering and business decision analysis and ethics.  The book ends with evidence that BP hasn't really changed its corporate culture, and implies that the company's next disaster will occur on Alaska's north slope.  A post from earlier this year in the Columbia Journalism Review, by Alexis Sobel Fitts, shows that BP is even now aggressively trying to influence public perception of the Deepwater Horizon disaster.

One issue that arises is the role of federal and state government regulators.  The author discusses this issue, covering a number of agencies, though the primary emphasis is on the Environmental Protection Agency.  This is perhaps due to his access to very candid sources from that agency.  There is relatively little discussion of the U.S. Department of the Interior's Minerals Management Service (MMS); fortunately you can read more about the role of this obscure agency in a May 2010 Rolling Stone article by Tim Dickinson.  I wish that Lustgarten had incorporated more discussion of other regulators, including Dickinson's findings.

Run to Failure has been reviewed in a number of scientific journals, such as Nature (Mascarelli, 2012).  The most useful reviews, in my view, are those by Peter Dykstra at Environmental Health News (here) and Matthew T. Huber (2013) in Contemporary Sociology.  I strongly recommend this book for those interested in engineering and business ethics, corporate culture, and the energy industry.



References


Matthew T. Huber, 2013:  Review of Lustgarten (2012).  Contemporary Sociology, 42:  400-401.

Abrahm Lustgarten, 2012:   Run to Failure:  BP and the Making of the Deepwater Horizon Disaster (W. W. Norton, New York).

Amanda Mascarelli, 2012:  Plumbing the depths.  Nature, 483:  154-155.

Tuesday, December 2, 2014

Congratulations to the Royal Society

The Royal Society of London is celebrating the 350th anniversary of its Philosophical Transactions, the world's oldest scientific journal. The journal has pioneered many of the features that we associate with scientific journal publishing, such as peer review, establishing priority, archiving, and dissemination.  The journal evidently was founded in 1665 by the Society's secretary, Henry Oldenburg.

This Royal Society blog post by Julie McDougall-Waters provides some context to the celebration.  DTLR joins in congratulating the Royal Society in celebrating this anniversary.

Thursday, November 6, 2014

Journals unite for reproducibility: DTLR is pleased!

Today Nature and Science posted joint editorials (here and here) endorsing the Proposed Principles and Guidelines for Reporting Preclinical Research posted at the U.S. National Institutes of Health.  The guidelines are the product of a June 2014 workshop sponsored by NIH and the two journals, and have been endorsed by over 30 other biomedical journals.  They provide a bare minimum list of criteria for good reporting practices for animal experiments in biomedical research.  The list is less detailed than the one given by Landis et al. (2012), which the guidelines cite.

DTLR joins in endorsing the proposed principles and congratulates all the participants for taking a major step forward in promoting reproducible research.  The effort reflects a discipline-wide concern about the metastasis of nonreproducible research across the spectrum of journals, as abundant evidence has made clear in recent years.  The proposed principles emphasize study design issues such as randomization, blinding, sample size, and appropriate replication.  Appropriately, such issues receive more space than analysis.  (The use of the term "inclusion/exclusion criteria" is a little confusing here - in clinical research, this refers to patient enrollment criteria; but the authors here seem to use it to refer to selective reporting and data omission, certainly an important issue, but one I would have described with other language.)  The sharing of data sets and the full disclosure of biological reagents are also welcome features.  In general, the materials and methods sections of papers really should be expanded so that an independent laboratory could reproduce the experiment and expect to obtain similar results.
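As a concrete, entirely hypothetical illustration of two of the design items the guidelines ask authors to report, here is a minimal Python sketch of a balanced, randomized allocation of animals to groups with blinded labels; none of the names, group sizes, or labels come from the NIH document:

```python
import random

def blinded_allocation(n_animals=20, groups=("vehicle", "treatment"), seed=2014):
    """Randomly assign animals to balanced groups; return blinded labels for the
    experimenters and an unblinding key to be held by a third party."""
    rng = random.Random(seed)  # fixed seed so the allocation itself is reproducible
    animal_ids = [f"A{i:02d}" for i in range(1, n_animals + 1)]
    assignments = [groups[i % len(groups)] for i in range(n_animals)]  # balanced groups
    rng.shuffle(assignments)                                           # randomization step
    blinded_labels = {aid: f"X{idx:02d}" for idx, aid in enumerate(animal_ids)}
    unblinding_key = {blinded_labels[aid]: grp
                      for aid, grp in zip(animal_ids, assignments)}
    return blinded_labels, unblinding_key

labels, key = blinded_allocation()
print(labels["A01"])   # experimenters see only the coded label, not the group
```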

DTLR does not believe that the proposed principles go far enough, however.  For instance, the section on statistics requires a disclosure of "the statistical test used"; the fact that the emphasis here is on a statistical test rather than an estimation procedure is a serious oversight, in my view.  The reporting of confidence intervals instead of tests provides a sense of magnitude and direction that is lacking in a p-value, allowing evaluation of both clinical and statistical significance.  A statistical test outcome only communicates statistical significance. A confidence interval implicitly reports a test result when the confidence limits are compared with zero (for a conventional null hypothesis test).  Other types of estimation (tolerance intervals, prediction intervals) may be more appropriate in some situations.
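To make the contrast concrete, here is a minimal sketch using simulated data (not data from any of the cited studies): the same two-group comparison reported as an estimated difference with a 95% confidence interval, alongside the bare test result.  The interval conveys magnitude and direction; the p-value alone does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=12)   # hypothetical control group
treated = rng.normal(loc=11.5, scale=2.0, size=12)   # hypothetical treated group

# Estimation: difference in means with a pooled-variance 95% confidence interval
n1, n2 = treated.size, control.size
diff = treated.mean() - control.mean()
pooled_var = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

# Testing: the corresponding two-sample t-test
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"difference = {diff:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")  # magnitude and direction
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")                            # significance only
```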

DTLR is glad to see a growing consensus in the scientific community that nonreproducible research is corrosive and must be reduced.  This is a terrific step forward, but just one step.  Now, individual laboratories must take these guidelines to heart and use them to improve the design, execution, analysis, and reporting of their studies. 

References


Nature, vol. 515, p. 7 (2014).

Science, vol. 346, p. 679 (2014).

S. C. Landis, et al., 2012:  A call for transparent reporting to optimize the predictive value of preclinical research.  Nature, 490:  187-191.

Sunday, August 17, 2014

Evolution of airplanes: a follow-up

I thank Prof. Bejan for graciously replying to my critique of his work in a previous post.  Permit me to follow up briefly here.

Bejan is correct that his earlier publications have cited Tennekes and the others.  He is also right that the earlier writers did not include land and aquatic locomotion in their analyses.  I was aware of Bejan's 2006 paper with Marden (which cites Tennekes only as a data source, not for analysis), but I have not seen his 2000 book published by Cambridge U.P.  I thank Prof. Bejan for clarifying these points, although my original post did make many of them already.

Nonetheless, Bejan's 2014 paper makes a specific point about the Concorde case, which Tennekes has discussed at length, as I showed.  A citation in Bejan's 2014 paper in the context of the Concorde discussion would have been pertinent for readers.  As it stands, the 2014 paper makes it seem that the 'outlying' nature of the Concorde on the diagram is a new finding, when it is not.

Bejan's comment also offers a very important distinction, one that I strongly affirm.  A purely empirical analysis of observational data is a wholly different activity from first-principles modeling of such data, especially when the latter is then validated by empirical data.  Many scientists indeed fail to appreciate this distinction.  However, the quantitative predictive modeling in Bejan's 2014 paper seems to be based on basic aerodynamic scaling arguments.  The link with evolution seems at best a metaphor; it is not clear to me that the evolutionary component of Bejan's work is predictive in any quantitative sense.  I stand by my previous comments on interpreting data, particularly the pteranodon case.  Being an outlier on the graph does not prevent the pteranodon from having been fit for its ecological niche in its day.  This would seem to limit the scope of the evolutionary metaphor when linked to specific aerodynamic scaling arguments.  My methodological criticisms of correlation analysis also remain valid.


Tuesday, August 12, 2014

The "evolution" of airplanes: DTLR is not impressed



About three weeks ago, the Journal of Applied Physics published a paper by Adrian Bejan and collaborators, “The evolution of airplanes” (Bejan et al., 2014).  Bejan is a named professor of mechanical engineering and materials science at Duke University, and author of well-known textbooks on heat transfer and thermodynamics.  His co-authors are a Boeing engineer and Duke alum, Jordan Charles, and a French civil engineering professor, Sylvie Lorente, who is also an adjunct Duke professor.  The publisher and Bejan’s university both issued news releases about the paper, and Bejan wrote about his work at The Conversation.  Indeed, the paper has received a lot of online press coverage.  The publisher’s own Inside Science news organ did include some critical comments in its coverage; additional critical comments were also posted at The Conversation in response to Bejan’s post.  The criticisms focus on the overall logic and philosophy of the paper.  I strongly sympathize with these criticisms.  Here, however, I will provide an additional perspective beyond those aired by others thus far.

The paper presents a number of simple analyses, including basic aerodynamic scaling arguments, which the authors compare favorably with empirical data on aircraft geometry and performance.  A particularly vivid graph in the paper shows empirical data comparing the body mass and velocity of airplanes with those of running, flying, and swimming animals.  The diagram (the paper’s Fig. 2) is reproduced below.

The Ref. 1 in the caption is Bejan and Marden (2006).  The authors make the point that the Concorde is an outlier in this diagram, and further comment as follows.

Looking at the graphs of this paper, we see that there is an outlier, the Concorde, which was perhaps the most radical departure from the traditional swept wing commercial airplane.  The Concorde’s primary goal was to fly fast.  In chasing an “off the charts” speed rating the Concorde deviated from the evolutionary path traced by successful airplanes that preceded it.  It was small, had limited passenger capacity, long fuselage, short wingspan, massive engines, and poor fuel economy relative to the airplanes that preceded it.  Even when it was in service, the Concorde did not sell, and only 20 units were ever produced (whereas successful Boeing and Airbus models were produced by the thousands).  Eventually, due to lack of demand and safety concerns, the Concorde was retired in 2003.  (Bejan, et al., 2014, p. 6.)

Except for the remark about the "evolutionary path", all of this is factual.  However, many of these observations are not original.  In a book published originally in Dutch in 1992, Henk Tennekes (2009) presents the following graph (his Fig. 2) comparing cruise speed and body weight; in the graph he tacitly ties cruise speed to wing loading (weight divided by wing surface area).  Although it does not include running and swimming animals, the graph is otherwise similar in spirit to Bejan et al.’s graph.

Tennekes attributes this sort of analysis to the former DuPont company head, Crawford H. Greenewalt, and to later scholars, including Colin J. Pennycuick.  Greenewalt’s original analysis was published in 1962; see Tennekes (2009) for citations and sources of data.  Tennekes also derives a simple scaling formula relating wing loading to cruise speed.  Equation 2, referred to in the caption, is a version of this scaling formula.
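For readers without the book at hand, the scaling argument has the familiar lift-balance form (my paraphrase, not a quotation of Tennekes' equation), where ρ is the air density, S the wing area, and C_L the lift coefficient: in steady cruise the lift must support the weight, so

```latex
W = \tfrac{1}{2}\,\rho\, V^{2} S\, C_{L}
\qquad\Longrightarrow\qquad
V = \sqrt{\frac{2}{\rho\, C_{L}}\,\frac{W}{S}} ,
```

i.e., cruise speed grows as the square root of wing loading W/S.  For geometrically similar flyers, weight scales with length cubed and wing area with length squared, so W/S grows roughly as W^(1/3), which yields the approximate V ∝ W^(1/6) trend line seen in such diagrams.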

What does Tennekes have to say about the Concorde?  In Chapter 1, he writes the following.

Wasn’t it supposed to fly at about 1,300 miles per hour?  How come it didn’t have higher wing loading and therefore smaller wings?  The answer is that the Concorde suffered from conflicting design specifications.  Small wings suffice at high speeds, but large wings are needed for taking off and landing at speeds comparable to those of other airliners.  If it could not match the landing speed of other airliners, the Concorde would have needed special, longer runways.  The plane’s predicament was that it has to drag oversize wings along when cruising in the stratosphere at twice the speed of sound.  It could compensate somewhat for that handicap by flying extremely high, at 58,000 feet.  Still, its fuel consumption was outrageous.  (Tennekes, p. 18)

And in the preface, Tennekes writes:

The Concorde went out with a bang.  A fiery crash near Paris on July 25, 2000, signaled the end of its career….In retrospect, the Concorde was a fluke, more so than anyone could have anticipated.  From an evolutionary perspective it was a mutant.  It was a very elegant mutant, but it was only marginally functional.  The fate of the Concorde inspired me to draw parallels between biological evolution and its technological counterpart wherever appropriate.  (Tennekes, p. xii)

Tennekes has more extensive comments on the Concorde in Chapter 6.  At his doctoral thesis defense, he argued “that supersonic airliners would be a step backward in the history of aviation” (p. 165).  He explains that with supersonic flight, the aircraft would have to generate shock waves in the air, which requires “a lot of energy” (p. 166).  On the same page,
Although Concorde passengers didn’t notice anything as their plane penetrated the sound barrier, the economic barrier was real enough.  If you want to exceed Mach 1, it will cost you 3 times as much as staying below the speed of sound.  For the aircraft industry, supersonic flight was indeed a step in the wrong direction.  Time and again, before aeronautical engineers started dabbling with supersonic flight, they had managed to reach higher speeds and lower costs.  The Concorde broke that trend.

I think it is unfortunate that both Bejan and Tennekes are tempted by the evolutionary metaphor; the critical comments I alluded to in my opening paragraph zero in precisely on this aspect of the work, as well as on Bejan’s “constructal law”, which he also purports to be at work here.  (I won’t bother to discuss that aspect further.)  Nonetheless the authors are correct that the empirical data and the aerodynamic scaling relationships are consistent with each other, though this consistency is possibly of limited use and interest.  It should not, however, be used to narrow one’s thinking.  For instance, in Tennekes’ plot, a number of animals show up as more severe ‘outliers’ than the Concorde.  Tennekes states that deviations from the trend line may be justified.  The pteranodon, for instance, was a soaring animal.  In prehistoric times there were no polar ice caps, which reduced the atmospheric temperature gradient between the poles and the equator compared to today; as a result there was less wind back then.  He presents other examples, including aircraft.  More generally, just because the bulk of the data fall along a trend line or curve, data away from that trend should not necessarily be deprecated.  Furthermore, correlation should not be confused with causation.  Bejan et al. (2014) offer no such nuances or caveats in their discussion.  Consequently they exaggerate the importance and implications of their findings.

It is also of great concern that Bejan et al. (2014) do not cite, either in the main paper or in their supplemental information, Tennekes' work, particularly in the context of the Concorde discussion.  This is unusually poor scholarship.  (Bejan does cite Greenewalt and Pennycuick in an earlier paper, Bejan and Marden, 2006.)  Bejan et al. (2014) also make pointless, tautologous statements such as “Small or large, airplanes are evolving such that they look more and more like airplanes, not like birds” and then in the next paragraph, “Small or large, airplanes are evolving such that they look the same.”  Their abstract ends with the non sequitur, “The view that emerges is that the evolution phenomenon is broader than biological evolution.  The evolution of technology, river basins, and animal design is one phenomenon, and it belongs in physics.”  Such statements are unjustified, unhelpful, and provide heat rather than light to the discussion.

A technical point should also be made:  at one point, Bejan et al. (2014) comment on their data analysis that “the correlation is statistically meaningful because its P-value is 0.0001, and it is less than 0.05 so that the null hypothesis can be rejected”.  This is a fairly naive and unimpressive statement.  The 0.05 threshold is conventional but totally arbitrary; moreover, the null hypothesis is one of no correlation at all, which is an incredibly low bar to establish a “meaningful” relationship between two variables.  Statistical significance does not necessarily convey practical significance.  For instance, it is possible to make a relationship with a negligibly small slope "statistically significant" if the sample size is large enough.  Reporting any kind of statistical inference (the p-value) on observational, non-randomly sampled data is itself questionable.  Moreover, as Loh (1987) noted, the correlation coefficient does not actually measure the closeness of the data to the best fit line.  The fitted equation and coefficient of determination, which the authors do provide, are more meaningful measures of the strength of the relationship between two variables.  The great statistician John Tukey (1954) stated that "most correlation coefficients should never be calculated."
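A small simulation (illustrative only; it does not use the authors' data) makes the point about sample size: with enough observations, a relationship with a negligibly small slope clears the conventional 0.05 threshold while explaining almost none of the variation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000                                  # very large sample
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)              # true slope is negligibly small

result = stats.linregress(x, y)
print(f"slope   = {result.slope:.4f}")         # close to 0.01
print(f"p-value = {result.pvalue:.2e}")        # far below 0.05: "statistically significant"
print(f"r^2     = {result.rvalue**2:.6f}")     # about 0.0001: ~0.01% of the variance explained
```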

To conclude, the publication of Bejan et al. (2014) in the Journal of Applied Physics is questionable.  The work should instead have been submitted for review at an aerodynamics or aerospace engineering journal, and I suspect it might not have impressed reviewers in that community.  Moreover, the authors should have cited Tennekes (2009), who provides a more detailed and nuanced discussion of the Concorde case, and they should take greater care in interpreting correlations in empirical data.  I think the rhetoric about evolution is superfluous, distracts from the authors' primary technical findings, and should have been dispensed with.  Other critics have focused their views on this last point, so I've not dwelt on it here.


References


A. Bejan and J. H. Marden, 2006:  Unifying constructal theory for scale effects in running, swimming, and flying.  Journal of Experimental Biology, 209:  238-248.

A. Bejan, J. D. Charles, and S. Lorente, 2014:  The evolution of airplanes.  Journal of Applied Physics, 116:  044901 (6 pages).

Wei-Yin Loh, 1987:  Does the correlation coefficient really measure the degree of clustering around a line?  Journal of Educational Statistics, 12:  235-239.

Henk Tennekes, 2009:  The Simple Science of Flight:  From Insects to Jumbo Jets.  Revised and expanded edition.  MIT Press (Cambridge, MA).  

John Tukey, 1954:   Causation, regression, and path analysis.  In Statistics and Mathematics in Biology, edited by O. Kempthorne, T. A. Bancroft, J. W. Gowen, and J. L. Lush.  Iowa State College Press (Ames), 35-66.

Thursday, July 3, 2014

Congratulations to Alvin

Last month, the Deep Submergence Vessel (DSV) Alvin celebrated its 50th anniversary of service to oceanographic research.  As Humphris et al. (2014) note, Alvin is "the world's first deep-diving submarine and the only one dedicated to scientific research in the United States."  Named after geophysicist Allyn Vine, the half-century-old submarine is returning to service after a major upgrade this year.  I recommend the article by Humphris et al. (2014) for readers interested in learning about the history of this unique vessel.

Reference


S. E. Humphris, C. R. German, and J. P. Hickey, 2014:  Fifty years of deep ocean exploration with the DSV Alvin.  Eos, Transactions, American Geophysical Union, 95 (22):  181-182.

Nobel laureates and fluid dynamics research

Do Nobel laureates do research in fluid dynamics? As far as I can tell, the Nobel Prize has been awarded specifically for achievements in fluid dynamics exactly once, to Hannes Alfven (Physics, 1970) “for fundamental work and discoveries in magnetohydrodynamics with fruitful applications in different parts of plasma physics.” An argument could be made for Pierre-Gilles de Gennes (Physics, 1991), who was awarded the prize for his work on liquid crystals, and Ilya Prigogine (Chemistry, 1977), who was recognized for his contributions to non-equilibrium thermodynamics and dissipative structures. These fields are at least adjacent to fluid dynamics, with considerable overlap. However, other Nobel laureates, who won the award for other achievements, have often dabbled in, or even made substantial contributions to, fluid dynamics research.

Perhaps the most accomplished was Lord Rayleigh (Physics, 1904), who won the prize for studies of the densities of gases, as well as for the discovery of argon. He made immense contributions to the theories of fluid dynamics and acoustics, particularly in the field of hydrodynamic instabilities. In the latter field we find his name attached to the Rayleigh-Taylor instability, the Plateau-Rayleigh instability, and Rayleigh-Benard convection (not to mention the Rayleigh number). Rayleigh's monograph, The Theory of Sound, is a landmark publication in the history of acoustics. Another Nobel laureate, astrophysicist Subrahmanyan Chandrasekhar (Physics, 1983), also contributed to the theory of hydrodynamic instability, authoring the similarly landmark monograph Hydrodynamic and Hydromagnetic Stability. As far as I know, Chandrasekhar is the only Nobel laureate to have served on the executive committee of the American Physical Society's Division of Fluid Dynamics (1954-1957, as chair in 1955; long before he won the Nobel Prize).

The theory of turbulence is said to concern the greatest unsolved problem of classical physics. Werner Heisenberg (Physics, 1932) and Lars Onsager (Chemistry, 1968) both developed turbulence theories analogous to the famous Kolmogorov-Obukhov -5/3 law. Heisenberg, whose doctoral dissertation was in fluid dynamics, developed his theory with Carl Friedrich von Weizsacker while detained at Farm Hall after WWII. Lev Landau (Physics, 1962) had his own theory of turbulence, and his monograph with E. M. Lifshitz, Fluid Mechanics, is one of the best known volumes of their Course of Theoretical Physics.

Other Nobel laureates dabbled in hydrodynamic research. Edward M. Purcell (Physics, 1952) wrote a famous paper, “Life at Low Reynolds Number” (1977), and T. D. Lee (Physics, 1957) wrote “On some statistical properties of hydrodynamical and magneto-hydrodynamical fields” (1952). Albert Einstein proposed a new airfoil design in 1916, although it was not successful. Richard Feynman lectured eloquently about fluid dynamics in two chapters of the Feynman Lectures, but as far as I know he did not pursue research in the field.

The above musings were prompted by an article in last week's issue of Science, co-authored by Ahmed H. Zewail (Chemistry, 1999). The paper (Lorenz and Zewail, 2014) concerns measurements of the motion of molten lead in a single zinc oxide nanotube, using electron microscopy. The work is a contribution to the young and growing field of nanofluidics.

Readers, do you know of other Nobel laureates who have contributed to fluid dynamics or related fields? Please leave your comments if you do.

Reference


Ulrich J. Lorenz and Ahmed H. Zewail, 2014:  Observing liquid flow in nanotubes by 4D electron microscopy.  Science, 344:  1496-1500.
 

Wednesday, May 28, 2014

Efimov trimers: a discovery in molecular physics

Quanta Magazine has an interesting feature article by Natalie Wolchover about the discovery of Efimov trimers, predicted in 1970 by Vitaly Efimov.  The first reported experimental result was in 2006, but it was not considered definitive.  Evidently three different groups now have posted their results, with the first paper published and the other two currently in peer review.  Take a look at Wolchover's article.

Welcome indeed, Scientific Data

The Nature journals have recently launched a new journal, Scientific Data, for the publication of data sets with detailed descriptions.  Readers should take a look at the journal's website and an editorial in the main journal, Nature, here.  This is a welcome experiment in improving the infrastructure of science (including providing a new incentive for sharing data) and promoting reproducible research. 

Saturday, May 10, 2014

Congratulations to Geophysical Research Letters

The American Geophysical Union (AGU) is celebrating the 40th anniversary of its letters journal, Geophysical Research Letters.  They've posted a collection of 40 papers published in the journal in past decades, a sort of 'greatest hits' list, and made them open access.  You can find it here.

I only recently joined the AGU and have not been an avid reader of this particular journal, yet I congratulate AGU on this milestone.

Saturday, April 26, 2014

A review of "Farewell to Reality" by Jim Baggott

Farewell to Reality:  How Modern Physics Has Betrayed the Search for Scientific Truth, by Jim Baggott (Pegasus Books, 2013).



The author has an axe to grind with modern physics.  On television and in books about contemporary physics intended for general audiences, established knowledge is seamlessly presented along with speculation and theories (like string theory) which do not, and possibly cannot, have experimental or observational support.  Baggott makes a distinction between what he calls the “authorized version” (theories of physics with well-established empirical support) and “fairy-tale physics” (theories that lack such support).  Moreover, according to him, some physicists have advocated a “post-empirical” re-defining of the scientific method, which would cut science loose from its empirical grounding.

The book begins with a chapter on some amateur philosophy of science, where Baggott sets out the six principles that he thinks demarcate science from metaphysics.  The first is the “reality principle” which is a statement of metaphysical realism – the real world is “out there” independent of our perception of it – tempered by acknowledging that we only have access to “things as they are measured”, not “things in themselves”.  Moreover, “reality is rational, predictable and accessible to human reason.”  Second is the “fact principle” which states that facts are not theory-neutral:  “Observation and experiment are simply not possible without reference to a supporting theory of some kind.”  Third is the “theory principle” which states that any creative process used to develop a theory is acceptable as long as the resulting theory works.  How we define whether a theory works leads to the fourth principle, the “testability principle”, which states that scientific theories must be empirically testable, and for this to be possible auxiliary assumptions are required.  Moreover, no single test is decisive, since either the theory or an auxiliary assumption may be responsible for any discrepancy.  The fifth principle is the “veracity principle” which states that theories can at best be tentatively accepted, while absolute certainty is beyond reach.  The final principle is the “Copernican principle” which states that we are not privileged observers (discussed in a different context by Adams and Laughlin, 1999).  The rest of the book is divided into two parts.  The first is an exposition of the “authorized version”, and the second is titled “The Grand Delusion”, where he outlines “fairy-tale physics” and his problems with it.

Part One begins with a chapter on quantum theory, including the foundational questions.  This is followed by a chapter on quantum field theory and the standard model of particle physics, up to and including the discovery of the Higgs boson.  The next chapter tackles special and general relativity.  Then follows a chapter on the standard model of big bang cosmology, including the inflation model and the unknown nature of dark matter and dark energy.  The final chapter of Part One is about the gaps and flaws of the authorized version.  These include puzzles about quantum measurement, difficulties with the standard models of particle physics and cosmology, and the lack of a theory of quantum gravity.  Efforts to address these issues, such as dark matter searches, are discussed.  Finally, the “fine-tuning problem” is introduced:  this states that the free parameters of the universe seem unusually fine-tuned to allow for the existence of life forms to observe it.

Part Two begins with a chapter on supersymmetry (SUSY).  Baggott feels that SUSY is at least a testable theory and that we can expect experimental elucidation in the next few years.  On the other hand, he is a skeptic of SUSY because he thinks it creates just as many problems as it solves.  He also points to the lack of experimental or observational evidence for supersymmetry thus far, although in my view this judgment is premature.  The next chapter takes on the numerous flaws of string theory (including superstrings and M-theory), ground previously trodden most famously by Smolin (2006) and Woit (2007).  The next chapter tackles various versions of the multiverse concept, from the “many worlds” interpretation of quantum theory to the inflationary multiverse.  All of these are dangerous, in Baggott’s view, as they violate the testability principle.  The next chapter, “Source Code of the Cosmos,” tackles a hodgepodge of ideas.  The first is Max Tegmark’s claim that the universe is a mathematical structure.  Next, he presents quantum information theory and quantum computing, in a more non-committal way.  (I assume he believes these fields do fall on the legitimate side of science, though not yet part of the authorized version, since much remains to be worked out in them.)  He then discusses the “black hole war” (involving ideas from general relativity, quantum theory, thermodynamics, and quantum information), which was resolved with the help of the holographic principle, realized concretely in Juan Maldacena’s AdS/CFT correspondence.  The fact that the “black hole war” between Stephen Hawking and Leonard Susskind was finally resolved shows that progress can be made here, but it is not the kind of progress Baggott would prefer.  The resolution of the “war” was based entirely on theoretical developments, without grounding in observational or experimental data.

The book’s penultimate chapter takes on the anthropic cosmological principle, which directly contradicts the Copernican principle that Baggott develops at the start of the book.  He also takes a swipe at the John Templeton Foundation in this chapter.  In the concluding chapter, Baggott tries to answer six questions.  First, “If fairy-tale physics isn’t science, what is it?”  Baggott’s answer is that the stuff isn’t even metaphysics, but rather “nothing but sophistry and illusion” (quoting philosopher David Hume).  Second, “But aren’t theoretical physicists supposed to be really smart people?”  He answers in the affirmative but gives an analogy with the financial crisis of 2008, which was partly the result of very intelligent financial engineers who nonetheless fell under a “grand delusion”.  Third, “Okay, but in the grand scheme of things is there any real harm done?”  Baggott’s answer is that the “integrity of the scientific enterprise” is being harmed.  This is where he trots out Brian Greene and Leonard Susskind apparently defending a post-empirical redefinition of the scientific method.  Fourth, “What do the philosophers have to say about it?”  Baggott cites only a commentary by philosophers Cartwright and Frigg (2007), but otherwise would like to hear more from philosophers.  Baggott states that “the guardianship of science and the scientific method should not be left solely in the hands of scientists, particularly those scientists with intellectual agendas of their own.”  Fifth, “Are we witnessing the end of physics?”  Baggott cites Horgan (1997) but offers that the list of unanswered questions in physics is still quite lengthy.  The real problem is impatience, which Baggott feels is a factor driving the development of fairy-tale physics.  The final question is “So, what do you want me to do about it?”  Baggott’s answer is to maintain a healthy skepticism when reading about contemporary physics.

So, what to make of the book?  Baggott focuses on particle physics, cosmology, and quantum information theory.  He makes no reference at all to the largest field in physics, condensed matter, not to mention all the other subfields of physics.  Smolin (2006) does the same but at least explains that he does; Baggott never explains that there are vast areas of physics untouched by the “fairy tale” issue he rants about.  Baggott also fails to explore the sociological reasons why “fairy tale” physics persists, an issue that Smolin (2006) does address in some detail.  Thus in comparing the two books, Baggott tackles a broader set of issues (whereas Smolin is mainly concerned about string theory) but Smolin gives a much more thorough account of his topic.

Personally I think Baggott is mostly right (though his dismissal of SUSY for lack of evidence is premature).  However, I think Smolin does a better job of convincing us that fairy-tale physics is actually damaging, showing how funding and hiring are being dominated by less than worthy theoretical efforts.  Baggott is clearly ticked off, but is not articulate enough about the damage and why we should care.  I am not as prepared to completely dismiss string theory as Baggott and Smolin are, but I certainly agree that in its current form it offers little in the way of scientific progress.  Nonetheless, it’s about time someone wrote a book like Baggott’s.

References




Fred Adams and Greg Laughlin, 1999:  The Five Ages of the Universe:  Inside the Physics of Eternity.  Free Press.

Nancy Cartwright and Roman Frigg, 2007:  String theory under scrutiny.  Physics World, Sept. 2007, p. 15.

John Horgan, 1997:  The End of Science:  Facing the Limits of Knowledge in the Twilight of the Scientific Age.  Little, Brown.

Lee Smolin, 2006:  The Trouble with Physics:  The Rise of String Theory, the Fall of a Science and What Comes Next.  Penguin.

Peter Woit, 2007:  Not Even Wrong:  The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics.  Vintage.