PINNACLE: PINN Adaptive ColLocation and Experimental Points Selection

Deep learning has been applied with great success to domains with massive datasets, such as computer vision (CV) and natural language processing (NLP). This raises an important question: can it be similarly applied to scientific and engineering domains, where data is typically limited due to the high cost of running experiments or simulations?

Fortunately, these domains also tend to have strong inductive biases: for example, symmetries and conservation laws, or equations describing the dynamics of the system. Incorporating such strong inductive biases can compensate for limited data and enable successful deep learning applications.

One such inductive bias is partial differential equations (PDEs), which apply to many scientific and industrial settings. Examples range from fluid dynamics, to molecular science in the quantum realm, to epidemiology, such as modeling how diseases like COVID-19 spread.


Physics-Informed Neural Networks, or PINNs, incorporate PDEs as soft constraints or regularization terms in the loss function, resulting in multiple loss terms that can be interpreted as penalties associated with different training point types. There are experimental points (EXP), which are training points from data collected via experiments or simulations, and various types of collocation points (CL), which are chosen during training to enforce each specified PDE and the initial or boundary conditions.
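To make the multi-term structure concrete, here is a minimal sketch of such a composite loss for the 1D heat equation. It is purely illustrative (not the paper's code): the "network" is a tiny fixed MLP, and derivatives are taken by finite differences, whereas real PINNs use automatic differentiation.

```python
import numpy as np

# Toy stand-in for a PINN u(t, x): a tiny one-hidden-layer MLP with
# fixed random weights (illustrative only; not trained here).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def u(t, x):
    h = np.tanh(np.stack([t, x], axis=-1) @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

def pde_residual(t, x, eps=1e-3):
    # Residual of the heat equation u_t - u_xx = 0, via finite
    # differences (real PINNs would use automatic differentiation).
    u_t = (u(t + eps, x) - u(t - eps, x)) / (2 * eps)
    u_xx = (u(t, x + eps) - 2 * u(t, x) + u(t, x - eps)) / eps**2
    return u_t - u_xx

def pinn_loss(exp_pts, exp_vals, pde_pts, ic_pts, ic_vals):
    # One penalty per training point type: EXP data fit, PDE collocation
    # residual, and initial-condition collocation fit.
    loss_exp = np.mean((u(*exp_pts) - exp_vals) ** 2)
    loss_pde = np.mean(pde_residual(*pde_pts) ** 2)
    loss_ic = np.mean((u(*ic_pts) - ic_vals) ** 2)
    return loss_exp + loss_pde + loss_ic
```

Each term is evaluated only on its own point set, which is why the choice of where to place EXP and CL points matters so much for training.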

However, while PINNs have been successfully applied to some problems, they are in general challenging to train. Each point type has different training dynamics, making the overall loss hard to optimize. Furthermore, training also requires large numbers of EXP and CL points, both of which lead to high costs.

This naturally leads to a question: Can we jointly select all types of training points in order to improve the training of PINNs? 

In our work, we show that the answer is yes. We theoretically analyse PINN training dynamics using the empirical Neural Tangent Kernel (NTK), and based on this analysis develop an automatic, adaptive point selection method, PINNACLE.
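For intuition, the empirical NTK of a network with parameters θ is the Gram matrix K(x, x′) = J(x) J(x′)ᵀ of output-parameter Jacobians, which (to first order) describes how a gradient step taken at one set of points changes the network's outputs at another. A rough sketch, with a toy model and a finite-difference Jacobian in place of autodiff (all names are ours, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.normal(size=20)  # flattened weights of a tiny two-layer model

def f(params, x):
    # Tiny network on scalar inputs, standing in for a PINN's output head.
    W1 = params[:10].reshape(1, 10)
    W2 = params[10:].reshape(10, 1)
    return (np.tanh(x[:, None] @ W1) @ W2).squeeze(-1)

def jacobian(params, x, eps=1e-5):
    # Finite-difference Jacobian df/dparams; real implementations
    # would use automatic differentiation.
    J = np.empty((len(x), len(params)))
    for i in range(len(params)):
        d = np.zeros_like(params)
        d[i] = eps
        J[:, i] = (f(params + d, x) - f(params - d, x)) / (2 * eps)
    return J

def empirical_ntk(params, x1, x2):
    # K(x1, x2) = J(x1) J(x2)^T: how a gradient step on points x2
    # perturbs the outputs at points x1, to first order.
    return jacobian(params, x1) @ jacobian(params, x2).T
```

In a PINN, the outputs entering the Jacobian include both the network values at EXP points and the PDE/IC/BC residuals at CL points, so the same kernel couples the training dynamics of all point types.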



PINNACLE is able to significantly outperform benchmarks (existing point selection methods for training PINNs) on a wide variety of tasks, such as:

  • Forward problems, where we are given the specific PDE and initial/boundary conditions, and our task is to learn the PDE solution u given a fixed CL point budget and no EXP points.
  • Inverse problems, where we are given the form of the PDE but have unknown PDE parameters that we need to learn from EXP data.
  • Transfer learning problems, where we have already trained a PINN for a given initial condition, but may need to adapt it to a perturbed initial condition.

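The three settings above differ mainly in which ingredients are available and which are learned. A hypothetical summary table in code (the field names are ours, purely for illustration, and not from the PINNACLE codebase):

```python
# Which ingredients each task uses; True/False flags are illustrative.
TASKS = {
    # PDE and IC/BC fully known; fit the solution from CL points alone.
    "forward": {
        "exp_points": False, "cl_points": True,
        "learn_pde_params": False, "init_from_pretrained": False,
    },
    # PDE form known but parameters unknown; EXP data constrains them.
    "inverse": {
        "exp_points": True, "cl_points": True,
        "learn_pde_params": True, "init_from_pretrained": False,
    },
    # Start from a PINN trained on one IC, adapt to a perturbed IC.
    "transfer": {
        "exp_points": False, "cl_points": True,
        "learn_pde_params": False, "init_from_pretrained": True,
    },
}
```

In every case the CL points are present, which is why a method that selects all point types jointly can help across all three tasks.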
We also found that PINNACLE's automatic point selections are surprisingly interpretable, and resemble heuristics from past works that require manual user input. For example, it automatically adjusts the proportion of PDE and ICBC points selected, and focuses on more informative regions, which change as training progresses.

In conclusion, we proposed a novel algorithm, PINNACLE, that jointly optimizes the selection of all training point types for PINNs for more efficient training, while automatically adjusting the proportion of collocation point types as training progresses. For details, please check out our ICLR 2024 paper here.
