Learning transition states: approximation, sampling, and
optimization with rare data
Grant Rotskoff, CIMS
Abstract:
The surprising flexibility and undeniable empirical success
of machine learning algorithms have inspired many theoretical
explanations for the efficacy of neural networks. Here, I will briefly
introduce one perspective that provides not only asymptotic guarantees
of trainability and accuracy in high-dimensional learning problems, but
also prescriptions and design principles for learning.
Bolstered by the favorable scaling of these algorithms in
high-dimensional problems, I will turn to a central problem in computational
condensed matter physics---that of computing reaction pathways. From
the perspective of an applied mathematician, these problems typically
appear hopeless; they are not only high-dimensional, but also dominated
by rare events. However, with neural networks in the toolkit, at least
the dimensionality is somewhat less intimidating. I will describe an
algorithm that combines stochastic gradient descent with importance
sampling to optimize a function representation of a reaction pathway
for an arbitrary system. Finally, I will provide numerical evidence of
the power and limitations of this approach.
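For orientation, the sketch below shows one way such a scheme can be realized,
assuming a toy two-dimensional double-well potential, a fixed Gaussian bias
distribution, and a committor-like neural network as the learned representation
of the pathway. All of these choices, along with the hyperparameters, are
illustrative assumptions for the sketch; they are not the systems, samplers,
or settings discussed in the talk.

# Minimal sketch (illustrative only): learn a committor-like function
# q_theta on a toy 2D double-well potential by stochastic gradient descent
# on an importance-weighted variational loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
beta = 3.0  # inverse temperature (illustrative value)

def potential(x):
    # Toy energy: double well along the first coordinate, harmonic in the second.
    return (x[:, 0] ** 2 - 1.0) ** 2 + 0.5 * x[:, 1] ** 2

# q_theta: R^2 -> (0, 1), a neural-network representation of the pathway
# (here, a committor-like function between the two metastable basins).
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
opt = torch.optim.SGD(net.parameters(), lr=5e-2)  # plain SGD on minibatches

def sample_biased(n):
    # Importance sampling: draw from a broad Gaussian centered on the saddle
    # region rather than the Boltzmann measure, so the rare transition region
    # appears in every minibatch; reweight to correct for the bias.
    x = 0.7 * torch.randn(n, 2)
    log_rho = -0.5 * (x ** 2 / 0.7 ** 2).sum(dim=1)  # unnormalized bias log-density
    log_w = -beta * potential(x) - log_rho           # unnormalized log importance weights
    return x, log_w

for step in range(2000):
    x, log_w = sample_biased(256)
    x.requires_grad_(True)
    q = net(x).squeeze(-1)
    grad_q = torch.autograd.grad(q.sum(), x, create_graph=True)[0]
    w = torch.softmax(log_w, dim=0)                   # self-normalized weights
    dirichlet = (w * grad_q.pow(2).sum(dim=1)).sum()  # weighted variational (Dirichlet) energy

    # Soft boundary conditions: q near 0 in the reactant basin, near 1 in the product basin.
    xa = 0.1 * torch.randn(64, 2) + torch.tensor([-1.0, 0.0])
    xb = 0.1 * torch.randn(64, 2) + torch.tensor([+1.0, 0.0])
    loss = dirichlet + 10.0 * (net(xa).pow(2).mean() + (net(xb) - 1.0).pow(2).mean())

    opt.zero_grad()
    loss.backward()
    opt.step()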