This two-day workshop will feature ten speakers who will celebrate the far-reaching impact in applied mathematics of the work of both Charlie Epstein and Leslie Greengard, which spans fast algorithms, PDEs and integral equations, medical imaging, numerical analysis, population genetics, and data analysis. Registration for the workshop is open to all and includes poster sessions, a banquet, and plenty of informal interaction time. Full travel/housing support will be provided on a competitive basis for up to twenty-five early-career researchers (graduate students, post-docs, and pre-tenure faculty).
Please stay tuned for information regarding the schedule, confirmed speakers, hotel accommodations, and more.
This workshop is graciously sponsored by The Simons Foundation, Yale University, and the Office of Naval Research. Thanks also to Manas Rachh, to Yale Conferences and Events (Megan Palluzzi and her team), and to the Yale Mathematics Department staff (Diane Altschuler and Karen Kavanaugh) for their help.
Poster sessions: Here is a PDF file listing the presenters, titles and abstracts, and spatio-temporal coordinates.
Title: Longitudinal brain mapping in the natural history and treatment of neurodegenerative disease
Abstract: I will review my recent efforts to translate academic brain mapping practices to the biotech setting. Specific examples will connect cognitive change, MRI modalities and biomarkers of amyloid and tau pathology within a consistent analytical framework.
Title: Leslie and the Fusionauts
Abstract: Magnetic confinement fusion research focuses on finding robust physics and engineering methods to address the following three intertwined challenges simultaneously: 1) Efficiently heating the plasma and driving a high current through it using radio-frequency waves; 2) Ensuring the macroscopic stability of the confined plasma at high pressure; 3) Reducing particle and heat diffusion associated with small scale turbulence. Remarkably, Leslie Greengard has recently made significant contributions to all three topics, relying on the fact that many of the questions associated with these three challenges can be expressed in terms of elliptic PDEs. The methods and solvers he developed for fusion and plasma physics will be the subject of this presentation.
Title: An efficient direct solver with high-order accurate discretization and its applications
Abstract: Access to an accurate and efficient elliptic PDE solver often determines which physical phenomena can be modeled computationally. The recently developed Hierarchical Poincaré-Steklov (HPS) method is designed to address this need. The HPS method is a high-order discretization technique that comes with a nested-dissection-inspired direct solver. In this talk, we will review the HPS method and the corresponding direct solver. Recent developments, including the integration of the HPS method into inverse scattering and an adaptive version of the method, will be presented. Numerical results will illustrate the performance of the method for a variety of experiments.
Title: Fast summation algorithms for decaying exponentials arising in MRI simulations
Abstract: We present simple, accurate methods for evaluating sums of decaying complex exponentials. Such problems arise in a variety of applications, such as the numerical simulation of received MRI signals. The proposed method is based on the interpolative decomposition of real decaying exponentials, combined with the discrete Fourier transform. The interpolation basis of received signals, the interpolation nodes for time sampling, and the signal scaling factors can all be precomputed, yielding a fast scheme involving a small number of discrete Fourier transforms. Numerical experiments show that speedup is achieved over direct simulation for MRI images of size 128x128 or larger. We will then show how to incorporate this scheme into the non-uniform Fast Fourier Transform (NUFFT) framework using complex-valued spreading functions, and compare both approaches. This is joint work with Andrew Dienstfrey.
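For concreteness, here is the kind of sum the abstract refers to, evaluated naively. This is only the direct baseline that the proposed interpolative-decomposition/DFT scheme accelerates, not the fast method itself, and all parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_times = 500, 1000

# Random amplitudes, frequencies, and positive decay rates (illustrative values).
a = rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)
omega = rng.uniform(-np.pi, np.pi, n_terms)   # oscillation frequencies omega_j
lam = rng.uniform(0.01, 1.0, n_terms)         # decay rates lambda_j > 0
t = np.linspace(0.0, 10.0, n_times)

# Direct O(n_terms * n_times) evaluation of
#   s(t_k) = sum_j a_j * exp((i*omega_j - lambda_j) * t_k).
s = np.exp(np.outer(t, 1j * omega - lam)) @ a
```

For large images, every pixel contributes such a sum at many time samples, which is why reducing the cost below this dense matrix-vector product matters.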
Title: Guarantees for Manifolds Intersecting Large Pieces of a Data Set
Abstract: Let X be a (large) collection of points in an arbitrarily high-dimensional Euclidean space. If a large piece of X has any "reasonable" structure Y, one would want Y to be contained in a manifold satisfying strong restrictions. The result we present is joint work with Gilad Lerman and Raanan Schul. Let N be an integer. Then there is a David-Semmes "manifold" M (strictly speaking, a varifold) whose "size" depends only on the choice of N, containing a "large number" of points. We will explain the meaning of all these statements in our talk. The upshot is that the estimates depend only on the integer N, and not on the ambient dimension of the data set. The proof uses multiscale SVD arguments, and the algorithm has a very fast running time.
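A toy version of the phenomenon behind the SVD arguments (purely illustrative, not the authors' construction): when data concentrates near a low-dimensional piece of a high-dimensional space, the singular values of the centered data expose the intrinsic dimension regardless of the ambient dimension.

```python
import numpy as np

rng = np.random.default_rng(3)
ambient, intrinsic, n = 50, 2, 1000

# Points near a random 2-D plane embedded in R^50, with small off-plane noise.
basis = np.linalg.qr(rng.standard_normal((ambient, intrinsic)))[0]
X = rng.standard_normal((n, intrinsic)) @ basis.T
X += 0.01 * rng.standard_normal((n, ambient))

# Two dominant singular values; the remaining 48 sit at the noise level.
sing = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
```

The multiscale version of this idea repeats the computation on neighborhoods at many scales, which is what keeps the estimates independent of the ambient dimension.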
Title: Fast Methods for the Laplace-Beltrami Equation on the Sphere
Abstract: Fast integral equation methods for solving the Laplace-Beltrami equation on the unit sphere in the presence of multiple "islands" are presented. One approach involves mapping the surface of the sphere to a multiply-connected region in the complex plane via a stereographic projection. Discretizing the integral equation in the complex plane results in a linear system whose solution can be accelerated using the fast multipole method for the 2D Coulomb potential. More recently, a fast direct solver has been developed with the intent to study point vortex motion on the sphere in the presence of geography. Investigating such problems requires solving a linear system with multiple right hand sides, an application for which fast direct methods are ideal. Several numerical examples are given to demonstrate the performance of both approaches.
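The stereographic projection mentioned in the abstract is a one-line formula; the sketch below (with made-up sample points) shows the standard projection from the north pole of the unit sphere to the complex plane.

```python
import numpy as np

def stereographic(x, y, z):
    """Map a point (x, y, z) on the unit sphere to w = (x + i*y) / (1 - z),
    projecting from the north pole (0, 0, 1)."""
    return (x + 1j * y) / (1.0 - z)

# A couple of sample points on the unit sphere.
south_pole = stereographic(0.0, 0.0, -1.0)   # maps to the origin
equator_pt = stereographic(1.0, 0.0, 0.0)    # lands on the unit circle
```

Because the projection is conformal, the Laplace-Beltrami problem on the sphere transforms into a Laplace-type problem in the plane, which is what makes the 2D Coulomb FMM applicable.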
Title: Density-based clustering: How to avoid kernel density estimation
Abstract: A limitation of many clustering algorithms is the requirement to tune adjustable parameters for each application, or even for each dataset. Some techniques require an a priori estimate of the number of clusters, while density-based techniques usually require a scale parameter. The kernel density method is the standard approach for obtaining a continuous empirical probability density function from discrete samples, but it faces much the same dilemma as forming a histogram: how should one choose the bin size? In this presentation we describe non-parametric methods for clustering discrete data samples without scale parameters and with a minimum of other adjustable parameters.
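The bandwidth dilemma the abstract alludes to is easy to see in a few lines. This sketch is purely illustrative (the data and bandwidths are invented) and is not the speaker's method; it shows the same sample appearing bimodal or unimodal depending on the kernel scale h.

```python
import numpy as np

def kde(samples, grid, h):
    """Gaussian kernel density estimate with bandwidth h at each grid point."""
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
# Two well-separated clusters of 200 points each.
samples = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
grid = np.linspace(-5, 5, 201)

narrow = kde(samples, grid, h=0.2)   # resolves both modes
wide = kde(samples, grid, h=3.0)     # over-smooths them into a single bump
```

A density-based clustering algorithm built on such an estimate inherits the choice of h, which is precisely the dependence the talk aims to remove.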
Title: On the Solution of Elliptic Partial Differential Equations on Regions with Corners
Abstract: The solution of elliptic partial differential equations on regions with non-smooth boundaries (edges, corners, etc.) is a notoriously refractory problem. In this talk, I observe that when the problems are formulated as boundary integral equations of classical potential theory, the solutions (of the integral equations) in the vicinity of corners are representable by series of elementary functions. In addition to being analytically perspicuous, the resulting expressions lend themselves to the construction of accurate and efficient Nyström discretizations of the associated boundary integral equations. The results are illustrated by a number of numerical examples.
Title: Recent results on total variation minimization
Abstract: We will discuss recent results on total variation minimization, covering a range of topics including the choice of regularization parameter, sampling strategies in the setting of subsampled Fourier measurements, and fast algorithms for reconstruction.
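As a toy illustration of the kind of reconstruction problem involved (a minimal sketch under invented parameters, not any of the algorithms of the talk), here is 1-D total variation denoising by gradient descent on a smoothed TV energy.

```python
import numpy as np

def tv_denoise(f, lam=0.5, eps=1e-2, step=0.05, n_iter=2000):
    """Minimize E(u) = 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    by plain gradient descent; eps smooths the non-differentiable TV term."""
    u = f.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)                              # d/dd sqrt(d^2+eps)
        tv_grad = np.concatenate([[0.0], w]) - np.concatenate([w, [0.0]])
        u -= step * ((u - f) + lam * tv_grad)
    return u

# Noisy piecewise-constant signal: TV removes the noise but keeps the jump.
rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(50), np.ones(50)])
f = truth + 0.1 * rng.standard_normal(100)
u = tv_denoise(f)
```

Gradient descent is chosen here only for transparency; the fast algorithms of the talk replace it with far more efficient solvers.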
Title: Spectral Exponential Time Differencing Schemes and Singular Sturm-Liouville Operators
Abstract: Many problems in fluid mechanics, plasma physics, population genetics and Riemannian geometry involve degenerate diffusion operators that make the equations stiff, along with nonlinear terms that are difficult to treat implicitly. We describe a new approach to generating high-order Exponential Time Differencing schemes for such equations, and give several examples in which singular Sturm-Liouville theory reveals the structure of the diffusion operator needed to implement the method.
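The core idea of exponential time differencing can be shown on a scalar example (standard first-order ETD only; the talk concerns high-order schemes and singular operators, and the test equation and step size below are invented for illustration). The stiff linear part is integrated exactly, and only the nonlinearity is frozen over the step.

```python
import numpy as np

def etd1_step(u, h, c, N):
    """One step of first-order exponential time differencing (exponential Euler)
    for u' = c*u + N(u): u_{n+1} = e^{ch} u_n + h * phi1(ch) * N(u_n)."""
    phi1 = np.expm1(c * h) / (c * h)     # (e^{ch} - 1)/(ch), computed stably
    return np.exp(c * h) * u + h * phi1 * N(u)

# Stiff test problem u' = -100*u + 1, u(0) = 1, with steady state u* = 0.01.
c, h = -100.0, 0.1
u, u_euler = 1.0, 1.0
for _ in range(10):
    u = etd1_step(u, h, c, lambda v: 1.0)
    u_euler = u_euler + h * (c * u_euler + 1.0)   # forward Euler, same step
```

With this large step, forward Euler is unstable (its amplification factor is |1 + c*h| = 9) and blows up, while ETD1 relaxes to the steady state, since the exponential handles the stiffness exactly.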
Abstract: Many problems in fluid mechanics, plasma physics, population genetics and Riemannian geometry involve degenerate diffusion operators that make the equations stiff, along with nonlinear terms that are difficult to treat implicitly. We describe a new approach to generating high-order Expenential Time Differencing schemes for such equations, and give several examples in which singular Sturm-Liouville theory reveals the structure of the diffusion operator needed to implement the method.