My goal as a researcher is to uncover the mathematical foundations of Monte Carlo methods
in order to optimize their efficiency.
In pursuit of this goal, I combine numerical analysis of existing Monte Carlo methods
with the development of new methods that improve accuracy.
To find out more, check out my curriculum vitae
or take a look at the projects below.
You can also reach me at rw2515 [AT] nyu [DOT] edu.
Projects
In quantum mechanics and the analysis of Markov processes,
Monte Carlo methods are needed to identify low-lying eigenfunctions of dynamical generators.
The standard Monte Carlo approaches for identifying eigenfunctions,
however, can be inaccurate or slow to converge.
What limits the efficiency of the
currently available spectral estimation methods,
and what is needed to build more efficient methods for the future?
Error bounds for dynamical spectral estimation [SIAM Journal on Mathematics of Data Science, 2021][PDF] I proved the convergence of the leading spectral estimation method in biochemistry,
called the "variational approach to conformational dynamics" (VAC), and
derived detailed error bounds.
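For readers unfamiliar with VAC: in its basic form, the method estimates eigenvalues and eigenfunctions by solving a generalized eigenvalue problem built from time-lagged correlation matrices of basis functions evaluated along a trajectory. Here is a minimal sketch of that standard recipe (the function name and symmetrization choice are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eigh

def vac_eigenpairs(phi, lag):
    """Toy VAC estimator. phi: (n_frames, n_basis) array of basis functions
    evaluated along one trajectory; lag: lag time in frames.
    Returns estimated eigenvalues and basis-expansion coefficients."""
    x, y = phi[:-lag], phi[lag:]
    c0 = (x.T @ x + y.T @ y) / (2 * len(x))   # symmetrized covariance C(0)
    ct = (x.T @ y + y.T @ x) / (2 * len(x))   # symmetrized lag covariance C(tau)
    # Generalized symmetric eigenproblem C(tau) v = lambda C(0) v
    evals, evecs = eigh(ct, c0)
    order = np.argsort(evals)[::-1]           # leading eigenpairs first
    return evals[order], evecs[:, order]
```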
Integrated VAC:
A robust strategy for identifying eigenfunctions of dynamical operators [Journal of Physical Chemistry B, 2020][PDF] In this follow-up work, we extended VAC to make it more robust
for applications with limited data and flexible neural network approximation spaces.
Beyond walkers in stochastic quantum chemistry:
Reducing error using fast randomized iteration [Journal of Chemical Theory and Computation, 2019][PDF] I helped design a Monte Carlo scheme for computing the ground-state wavefunction
of small molecules, achieving efficiency gains of up to a factor of one thousand
compared to traditional approaches.
Improved fast randomized iteration approach to full configuration interaction [Journal of Chemical Theory and Computation, 2020][PDF] In this second paper, we further improved the efficiency
of the Monte Carlo scheme.
Approximating matrix eigenvalues by randomized subspace iteration [arXiv, 2021][PDF] I helped design a Monte Carlo scheme for computing multiple dominant eigenvectors of a high-dimensional matrix.
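The common thread in these projects is the "compress, then multiply" structure of fast randomized iteration: at every step of a power-type iteration, the iterate is replaced by a sparse random vector with the same expectation, so the cost per step stays modest even in enormous state spaces. A deliberately simplified sketch of that idea (the multinomial compression rule below is much cruder than the schemes analyzed in the papers, and it assumes a matrix with nonnegative entries and a dominant positive eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(v, m):
    """Unbiased stochastic compression: return a random vector with at most
    m distinct nonzero entries whose expectation equals v."""
    weight = np.abs(v).sum()
    idx = rng.choice(len(v), size=m, p=np.abs(v) / weight)  # sample indices ~ |v_i|
    out = np.zeros_like(v)
    np.add.at(out, idx, np.sign(v[idx]) * weight / m)
    return out

def randomized_power_iteration(A, m, n_iter=2000):
    """Toy 'compress, then multiply' power iteration. A real implementation
    would store only the O(m) nonzero entries of the compressed iterate;
    everything here is dense for clarity."""
    v = np.zeros(A.shape[0])
    v[0] = 1.0
    growth = []
    for _ in range(n_iter):
        v = A @ compress(v, m)
        growth.append(np.abs(v).sum())      # ~ dominant eigenvalue for nonnegative A
        v /= np.abs(v).sum()
    return np.mean(growth[n_iter // 2:])    # average after a burn-in period
```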
Figure: Spectral estimation error approaches zero as the number
of basis functions and the length of the time series increase.
Monte Carlo methods are needed to estimate the probabilities of rare, impactful events.
However, to estimate a rare probability \(p\) with just a single digit of accuracy,
direct Monte Carlo sampling requires a very large sample size (\(> 100p^{-1}\)).
Since generating such a large sample can be prohibitively computationally expensive,
are there more practical rare event sampling methods
that calculate rare probabilities with reduced sample size requirements?
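Where does the \(> 100p^{-1}\) threshold come from? The direct estimator is \(\hat{p} = N^{-1}\sum_{i=1}^{N} \mathbf{1}_{A_i}\), where \(\mathbf{1}_{A_i}\) indicates whether the rare event occurred in the \(i\)-th independent sample, and its relative error is
\[
\frac{\sqrt{\operatorname{Var}(\hat{p})}}{p} = \sqrt{\frac{1-p}{N p}} \approx \frac{1}{\sqrt{N p}},
\]
so keeping the relative error below \(10\%\), i.e., getting one reliable digit, requires \(N \gtrsim 100\,p^{-1}\). For \(p = 10^{-6}\), that is already on the order of \(10^{8}\) samples.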
A splitting method to reduce MCMC variance [arXiv, 2020][PDF] I proved that the weighted ensemble method can reduce MCMC's variance by multiple orders of magnitude when calculating the probability of rare events.
Practical rare event sampling for extreme mesoscale weather [Chaos, 2019][PDF] I invented an efficient new rare event sampling algorithm and applied
it to study intense tropical cyclones.
Unifying sequential Monte Carlo with resampling matrices [arXiv, 2019][PDF] I investigated the mathematical properties of resampling schemes,
which are an essential ingredient in rare event sampling.
Figure: Rare event "splitting" method is used to sample
rare, high values of the position \(x\).
White circles indicate that samples are killed.
Black circles indicate that samples are preserved and possibly replicated.
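As a toy version of the splitting idea in the figure, here is a bare-bones multilevel splitting estimator for a negative-drift random walk (a generic illustration with made-up parameters, not the weighted ensemble algorithm from the paper): walkers that fall back below zero are killed, walkers that reach the next level are kept and replicated, and the product of survival fractions estimates the rare probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_until(x0, lower, upper, drift):
    """Advance one random-walk trajectory from x0 until it exits (lower, upper).
    Returns (reached_upper, final_position)."""
    x = x0 + drift + rng.normal()            # always take at least one step
    while lower < x < upper:
        x += drift + rng.normal()
    return x >= upper, x

def splitting_estimate(levels, n_walkers=200, drift=-0.5):
    """Toy multilevel splitting: estimate the probability that the walk,
    started at 0, climbs above levels[-1] before dropping below 0."""
    positions = np.zeros(n_walkers)
    estimate = 1.0
    for level in levels:
        results = [run_until(x, 0.0, level, drift) for x in positions]
        survivors = [x for reached, x in results if reached]
        if not survivors:
            return 0.0                                    # every walker was killed
        estimate *= len(survivors) / n_walkers            # conditional survival fraction
        positions = rng.choice(survivors, size=n_walkers) # replicate survivors
    return estimate

print(splitting_estimate(levels=[2.0, 4.0, 6.0, 8.0, 10.0]))
```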
In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC)
methods are needed to approximate distributions on infinite-dimensional function spaces.
Yet designing efficient and broadly applicable MCMC methods for function spaces
has proved challenging.
The available samplers are either inefficient, or they
require gradient or covariance information that can be challenging to obtain.
This raises the question: are there functional samplers that perform efficiently
without requiring gradients or posterior covariance estimates?
Ensemble sampler for infinite-dimensional inverse problems [Statistics and Computing, 2021][PDF]
We developed a new sampler for infinite-dimensional inverse problems that outperforms other gradient-free methods.
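For readers curious what the gradient-free baseline looks like: a standard dimension-robust choice is the preconditioned Crank-Nicolson (pCN) proposal, which is well defined on function space because it leaves the Gaussian prior invariant, so only the data misfit enters the accept/reject step. A minimal discretized sketch follows (the toy likelihood and all parameters are illustrative assumptions; this is not the ensemble sampler from our paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def pcn_sampler(misfit, prior_sample, u0, beta=0.2, n_steps=5000):
    """Minimal preconditioned Crank-Nicolson (pCN) MCMC.
    misfit(u): negative log-likelihood Phi(u); the Gaussian prior never
    appears in the acceptance ratio because the proposal preserves it.
    prior_sample(): draws one sample from the Gaussian prior."""
    u, phi_u = u0, misfit(u0)
    chain = []
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * prior_sample()  # pCN proposal
        phi_v = misfit(v)
        if np.log(rng.random()) < phi_u - phi_v:                # accept on misfit only
            u, phi_u = v, phi_v
        chain.append(u.copy())
    return np.array(chain)

# Toy usage: N(0, I) prior on a 50-dimensional discretization, observing the
# mean of u with Gaussian noise (all values here are made up for illustration).
d, y_obs, noise = 50, 0.7, 0.1
phi = lambda u: 0.5 * ((u.mean() - y_obs) / noise) ** 2
samples = pcn_sampler(phi, lambda: rng.normal(size=d), u0=np.zeros(d))
```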