My last post resurrected some Eos articles from a few years ago, but today I'd like to discuss a piece in this month's issue. Maskey et al. write about "A Data Systems Perspective on Advancing AI", reporting on a NASA-sponsored workshop held in January. They describe "traditional" Earth science modeling as "top down", starting from first principles (the laws of physics), while the machine learning approach is "bottom up", with algorithms that learn relationships empirically from historical data. An inherent limitation of empirical modeling, they correctly recognize, is the inability of a model trained on historical data to extrapolate into regimes never seen in that training data. Yet this is precisely what Earth science is often called upon to do, for example when dealing with extreme weather events or climate change.

The writers propose that "physically aware machine learning models" could overcome this limitation, suggesting a melding of the "top down" and "bottom up" approaches. The authors mainly write about using physics to constrain the machine learning models or their cost functions during training, and they claim promising results already. It is less clear to me that placing constraints on an empirical model would allow it to credibly extrapolate; it seems more plausible that such constraints should improve its interpolation capability.
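To make the idea of a physics-constrained cost function concrete, here is a minimal sketch of my own (not taken from the Eos article or the workshop report). It uses a toy decay law, dy/dx = -k*y, as a stand-in for a real governing equation; the network architecture, the synthetic data, the constant k, and the penalty weight are all hypothetical choices for illustration only.

```python
# Minimal sketch of a physics-constrained ("physics-informed") cost function.
# Toy governing equation: dy/dx = -k * y, with k assumed known from first principles.
# Everything here (network, data, constants) is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 1.5  # hypothetical physical constant

# Small fully connected network mapping x -> y(x)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Sparse, noisy "observations" drawn from the true solution y = exp(-k x)
x_obs = torch.rand(20, 1)
y_obs = torch.exp(-k * x_obs) + 0.01 * torch.randn_like(x_obs)

# Collocation points where the physics residual is evaluated (no labels needed here)
x_col = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0  # weight of the physics penalty relative to the data misfit

for step in range(5000):
    optimizer.zero_grad()

    # Ordinary data-fitting term (the "bottom up" part)
    data_loss = torch.mean((model(x_obs) - y_obs) ** 2)

    # Physics residual dy/dx + k*y, computed via autograd (the "top down" part)
    y_col = model(x_col)
    dy_dx = torch.autograd.grad(y_col, x_col,
                                grad_outputs=torch.ones_like(y_col),
                                create_graph=True)[0]
    physics_loss = torch.mean((dy_dx + k * y_col) ** 2)

    # Combined cost: fit the data while penalizing violations of the physics
    loss = data_loss + lam * physics_loss
    loss.backward()
    optimizer.step()
```

The second term is the "constraint": even at points where no observations exist, the network is penalized for violating the assumed governing equation. That is the sense in which physics enters the training, though, as argued above, it is not obvious that such a penalty buys trustworthy extrapolation rather than just better-behaved interpolation.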
On this blog, I have previously noted demonstrated cases of deep neural networks that do appear able to generalize beyond their training data, though such cases are not well understood and are not convincing unless validated on independent data. From the context, it did not seem that those deep learning models were of the "physically aware" variety that Maskey et al. describe.
These are early days for efforts to apply machine learning to physical problems. We still have much to learn about what is possible with such efforts, and where their limits lie.