Machine learning (ML) algorithms, that is, algorithms that learn directly from data, are now commonly used to solve problems in mechanics and physics. Often, they are applied in conjunction with “traditional” algorithms that solve the equations of continuum physics. The convergence of these two seemingly different types of algorithms has led to blended techniques that retain the advantages of both. It has also led to a better understanding and design of ML algorithms through insights gained by considering their continuum limits. In this talk, I will describe our recent work along these two directions.
First, I will describe the solution of inverse problems with quantifiable uncertainty using Bayesian inference. In this context, we utilize deep Generative Adversarial Networks (GANs) to learn the prior distribution of the parameters to be inferred. The use of GANs allows us to learn the prior distribution purely from samples, without ad hoc assumptions. It also maps the inverse problem into the lower-dimensional latent space of the GAN, thereby reducing its computational complexity. The mechanics/physics associated with the problem enters through the likelihood term, which contains the forward operator for the problem. I will describe the application of this approach to a wide range of inverse problems, including heat conduction, elasticity imaging, computed tomography, and the inference of microstructure.
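The structure of this approach can be illustrated with a minimal sketch. Here the pretrained GAN generator is replaced by a hypothetical linear stand-in, the forward operator by a fixed linear observation map, and the posterior over the latent variable is explored only through its MAP point by gradient descent; all dimensions, names, and the toy maps are assumptions for illustration, not the actual method or its networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: the GAN latent space is much smaller than
# the parameter space, which is what reduces the cost of inference.
d_latent, d_param, d_obs = 2, 20, 10

# Stand-in for a pretrained GAN generator g(z) = A z + b.  In the
# actual method this is a deep network trained on prior samples.
A = rng.standard_normal((d_param, d_latent))
b = rng.standard_normal(d_param)

def generator(z):
    return A @ z + b

# Stand-in forward operator (in practice, a discretized PDE solve,
# e.g. for heat conduction or elasticity).
H = rng.standard_normal((d_obs, d_param))
sigma = 0.05  # observation-noise standard deviation

# Synthetic measurements from a "true" latent code.
z_true = rng.standard_normal(d_latent)
y = H @ generator(z_true) + sigma * rng.standard_normal(d_obs)

# Negative log-posterior over z: a Gaussian likelihood (the physics
# enters here, through H) plus the standard-normal latent prior.
def neg_log_post(z):
    r = y - H @ generator(z)
    return 0.5 * (r @ r) / sigma**2 + 0.5 * (z @ z)

# MAP estimate by gradient descent in the low-dimensional latent
# space; the gradient is analytic because the toy maps are linear.
M = (H @ A) / sigma
hess = M.T @ M + np.eye(d_latent)          # exact Hessian (quadratic problem)
step = 1.0 / np.linalg.eigvalsh(hess)[-1]  # stable step size

z = np.zeros(d_latent)
for _ in range(2000):
    grad = -(M.T @ (y - H @ generator(z))) / sigma + z
    z -= step * grad
```

With a deep generator the same objective is nonconvex, and the MAP search is typically replaced or supplemented by sampling (e.g. MCMC) in the latent space to quantify uncertainty; the key point is that all inference happens over `z`, not over the full parameter field.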
Next, I will describe techniques of unsupervised and semi-supervised learning that are driven by the spectral properties of the Graph Laplacian. For specific scalings, the Graph Laplacian has been shown to converge, in the large-data limit, to a continuous density-weighted Laplacian. We consider the discretization of this operator using the finite element method and demonstrate how this discretization may be used to understand the properties of its eigenspectrum and to develop numerical methods for clustering and few-shot labeling problems.
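The role of the Graph Laplacian's spectrum in clustering can be sketched on toy data. The example below, which is an illustration and not the finite-element construction of the talk, builds a Gaussian-kernel similarity graph on two well-separated point clouds and bi-partitions them by the sign of the second eigenvector (the Fiedler vector); the data, kernel bandwidth, and sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: two well-separated Gaussian blobs in 2-D.
n = 60
X = np.vstack([rng.normal(0.0, 0.3, size=(n, 2)),
               rng.normal(3.0, 0.3, size=(n, 2))])
labels = np.array([0] * n + [1] * n)

# Weighted similarity graph with a Gaussian kernel; the bandwidth
# eps plays the role of the scaling parameter in continuum-limit
# results for the Graph Laplacian.
eps = 1.0
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2 * eps**2))
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Spectral bi-partition: the smallest eigenvalue is ~0 (constant
# eigenvector); the sign pattern of the second eigenvector splits
# the graph into its two weakly connected clusters.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
pred = (fiedler > 0).astype(int)
```

Semi-supervised (few-shot) labeling methods use the same eigenvectors as a smooth basis in which a handful of labeled points suffice to extend labels to the whole dataset.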