h-Analysis and data-parallel physics-informed neural networks

Paul Escapil-Inchauspé, Gonzalo A. Ruz

Research output: Contribution to a journal › Article › peer review


We explore the data-parallel acceleration of physics-informed machine learning (PIML) schemes, with a focus on physics-informed neural networks (PINNs) on multi-GPU (graphics processing unit) architectures. To develop scale-robust and high-throughput PIML models for sophisticated applications that may require a large number of training points (e.g., involving complex and high-dimensional domains, non-linear operators, or multi-physics), we detail a novel protocol based on h-analysis and data-parallel acceleration through the Horovod training framework. The protocol is backed by new convergence bounds for the generalization error and the train-test gap. We show that the acceleration is straightforward to implement, does not compromise training, and proves to be highly efficient and controllable, paving the way towards generic scale-robust PIML. Extensive numerical experiments with increasing complexity illustrate its robustness and consistency, offering a wide range of possibilities for real-world simulations.
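The abstract's data-parallel scheme rests on a standard idea: each worker holds a shard of the training points, computes a local gradient, and an allreduce averages the local gradients into the full-batch gradient. The following is a minimal NumPy sketch of that averaging step only; it is not the authors' implementation, and the Horovod allreduce is simulated here by an in-process mean over equal-sized shards, using a toy least-squares loss in place of a PINN residual loss.

```python
import numpy as np

def local_grad(w, x, y):
    """Mean-squared-error gradient computed on one worker's shard of points."""
    return np.mean(2.0 * x * (w * x - y))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1024)   # stand-in for PDE collocation points
y = np.sin(np.pi * x)              # stand-in target values

n_workers = 4
shards = np.array_split(np.arange(x.size), n_workers)

w = 0.5
# Each worker computes a gradient on its own shard; averaging the shard
# gradients (what Horovod's allreduce does across GPUs) recovers the
# full-batch gradient exactly, because the shards are equal-sized.
grads = [local_grad(w, x[idx], y[idx]) for idx in shards]
g_avg = np.mean(grads)
g_full = local_grad(w, x, y)
print(abs(g_avg - g_full) < 1e-12)  # → True
```

This equivalence is why the paper can report that data parallelism "does not compromise training": up to floating-point rounding, the averaged update is the same one a single worker would take on the full point set, with a larger effective batch.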

Original language: English
Article number: 17562
Journal: Scientific Reports
State: Published - Dec. 2023
Published externally: Yes
