David Del Rey Fernandez, NASA-Langley
Romit Maulik, Argonne National Laboratory
Machine learning (ML) methods, and in particular deep learning via artificial neural networks, have generated significant interest in the scientific community over the last few years. Increasingly, ML techniques are being applied across a broad spectrum of scientific and engineering fields. In the context of numerically solving partial differential equations (PDEs), there is growing interest in applying ML techniques across the solver software stack. One area where ML may be transformational is the efficient utilization of production-level solvers. The latter depends on the careful choice of numerous parameters, for example for time marching, added dissipation, physical models (e.g., turbulence models), and for the associated solvers for linear and nonlinear systems of equations. The efficient deployment of such software therefore depends in part on accumulated engineering expertise to select appropriate parameter values. Automating the selection of these tunable parameters is potentially well suited to ML algorithms. Similarly, ML can be used to accelerate numerous aspects of the solution process, for example by predicting a refined mesh in an h/p-refinement process or a suitable numerical initial condition that can improve the solution process. Moreover, ML can also provide surrogates for computationally intensive tasks such as error estimation and uncertainty quantification.

Therefore, the purpose of this mini-symposium is to bring together a broad spectrum of researchers interested in the application of ML techniques to accelerating high-fidelity PDE solvers as well as enhancing the resulting outputs. Possible topics of interest include (but are not limited to) ML for mesh adaptation, error estimation, nonlinear solvers, system optimization, inverse problems, and surrogate modelling.