no code implementations • 14 Mar 2023 • Pierre-Jean Meyer
The literature on reachability analysis methods for neural networks currently focuses only on uncertainties in the network's inputs.
1 code implementation • 15 Nov 2021 • Pierre-Jean Meyer
Unlike other tools in the literature, which focus on restricted classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality: it can handle neural networks with any Lipschitz-continuous activation function.
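A minimal sketch of the idea behind such an approach: affine layers admit exact interval propagation, and any Lipschitz-continuous activation can be soundly enclosed around the interval midpoint using only its Lipschitz constant. This is an illustrative construction, not the tool's actual algorithm; the network weights and the choice of `tanh` below are hypothetical examples.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    # Exact interval image of the affine map x -> W x + b
    # over the box [lo, hi], via center/radius form.
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_lipschitz(f, L, lo, hi):
    # Sound elementwise enclosure of f([lo, hi]) for any
    # L-Lipschitz activation f: |f(x) - f(c)| <= L * |x - c|.
    c = (lo + hi) / 2
    r = L * (hi - lo) / 2
    fc = f(c)
    return fc - r, fc + r

# Hypothetical two-layer network with tanh (Lipschitz constant 1)
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
lo, hi = interval_affine(W1, b1, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
lo, hi = interval_lipschitz(np.tanh, 1.0, lo, hi)
```

Because the activation enclosure uses only the Lipschitz constant, the same two functions apply unchanged to any Lipschitz-continuous activation, at the cost of looser bounds than methods specialized to, e.g., ReLU.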
1 code implementation • 21 Nov 2019 • Pierre-Jean Meyer, Murat Arcak
Then we exploit these bounds and the evaluation of the first-order sensitivity matrices at a few sampled initial states to obtain an over-approximation of the first-order sensitivity, which is in turn used to over-approximate the reachable set of the original system.
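The sampling step can be illustrated in one dimension: simulate a few initial states, estimate the sensitivity of the final state to the initial state at each sample, over-approximate that sensitivity over the whole initial set, and inflate the sampled endpoints accordingly. This is a hedged sketch with made-up dynamics `g`; the paper combines sampled sensitivities with analytic bounds, whereas the inflation factor here is only a heuristic stand-in.

```python
import numpy as np

def g(x):
    # Illustrative discrete-time dynamics (not from the paper);
    # monotone and contractive, so the sketch is easy to check.
    return 0.9 * x + 0.1 * np.sin(x)

def simulate(x0, n):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def reach_interval(x0_lo, x0_hi, n, n_samples=5, eps=1e-6):
    # Sample a few initial states and estimate the first-order
    # sensitivity d x_n / d x_0 at each sample by finite differences.
    xs = np.linspace(x0_lo, x0_hi, n_samples)
    sens = [(simulate(x + eps, n) - simulate(x, n)) / eps for x in xs]
    # Heuristic over-approximation of the sensitivity on the whole
    # initial set (inflated sampled maximum; the paper uses sound bounds).
    S = 1.1 * max(abs(s) for s in sens)
    centers = [simulate(x, n) for x in xs]
    # Every initial state is within half the sample spacing of a sample,
    # so inflate the sampled extremes by S * spacing / 2.
    h = (x0_hi - x0_lo) / (n_samples - 1)
    return min(centers) - S * h / 2, max(centers) + S * h / 2
```

The over-approximation tightens as more initial states are sampled (smaller spacing `h`) or as the sensitivity bound `S` tightens, which mirrors the trade-off the sensitivity-based approach exploits.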