Negrini E., Citti G., Capogna L. (2023). Robust Neural Network Approach to System Identification in the High-Noise Regime. Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-44505-7_12].
Robust Neural Network Approach to System Identification in the High-Noise Regime
Negrini E.; Citti G.; Capogna L.
2023
Abstract
We present a new algorithm for learning unknown governing equations from trajectory data using a family of neural networks. Given samples of solutions to an unknown dynamical system, we approximate the forcing term f using a family of neural networks. We express the equation in integral form and use the Euler method to predict the solution at each successive time step, using at each iteration a different neural network as a prior for f. This procedure yields M-1 time-independent networks, where M is the number of time steps at which the solution is observed. Finally, we obtain a single function that approximates the forcing term by neural network interpolation. Unlike our earlier work, where we numerically computed the derivatives of the data and used them as targets for a Lipschitz-regularized neural network approximating f, the new method avoids numerical differentiation, which is unstable in the presence of noise. We test the new algorithm on multiple examples in a high-noise setting. We show empirically that generalization and recovery of the governing equation improve when a Lipschitz regularization term is added to the loss function, and that this method improves on our previous one, especially in the high-noise regime, where numerical differentiation provides low-quality target data. Finally, we compare our results with other state-of-the-art methods for system identification.
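A minimal numerical sketch of the derivative-free idea described in the abstract, under stated assumptions: instead of differentiating noisy data numerically, the forcing term f is fit so that a single Euler step from one observation reproduces the next. This is not the authors' implementation — here f is a stand-in linear model f(x) = a * x on a 1-D system (the paper uses a family of neural networks with a Lipschitz penalty in the loss; for a linear model the Lipschitz constant is simply |a|), and the system, noise level, and step size are illustrative choices.

```python
import numpy as np

# Illustrative ground-truth system: x' = -2 x, with x(0) = 1, so the
# exact trajectory is x(t) = exp(-2 t). We observe it on a uniform grid
# with additive Gaussian noise (hypothetical noise level).
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
x_clean = np.exp(-2.0 * t)
x = x_clean + 0.005 * rng.standard_normal(t.size)  # noisy samples

# Euler-step fitting: require x_{k+1} ~ x_k + dt * f(x_k) with
# f(x) = a * x. Minimizing the sum of squared Euler residuals over all
# steps gives a in closed form -- no numerical derivative of the noisy
# data is ever computed, which is the point the abstract emphasizes.
num = np.sum((x[1:] - x[:-1]) * x[:-1])
den = dt * np.sum(x[:-1] ** 2)
a_hat = num / den

print(f"recovered coefficient a: {a_hat:.3f}  (true value: -2)")
```

Even with noisy observations, the least-squares Euler-residual fit recovers a coefficient close to the true value -2, whereas a finite-difference estimate of x' at this noise level would be dominated by the noise amplification that the abstract's method is designed to avoid.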