1 code implementation • 19 Apr 2024 • Jing Cheng, Ruigang Wang, Ian R. Manchester
We take a recently proposed Polyak-Łojasiewicz network (PLNet) as a Lyapunov function and then parameterize the vector field as descent directions of that Lyapunov function.
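The descent-direction construction can be sketched in miniature. This toy uses a simple quadratic Lyapunov function rather than the paper's PLNet, purely to illustrate the idea that choosing the vector field as a descent direction of a Lyapunov function forces $V$ to decrease along trajectories:

```python
import numpy as np

# Illustrative toy, NOT the paper's PLNet: a quadratic Lyapunov function
# V(x) = x^T P x with P positive definite, and a vector field chosen as
# a descent direction of V, here f(x) = -grad V(x) = -2 P x.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # positive definite by construction

def V(x):
    return x @ P @ x

def f(x):
    # negative gradient of V: a descent direction everywhere except the origin
    return -2.0 * P @ x

# Euler-simulate the flow; V should decrease monotonically along the trajectory
x = np.array([1.0, -1.5])
values = [V(x)]
for _ in range(100):
    x = x + 0.05 * f(x)
    values.append(V(x))

assert all(b < a for a, b in zip(values, values[1:]))
```

With the step size small relative to the eigenvalues of `P`, the discrete trajectory inherits the continuous-time decrease of `V`, which is what makes the origin provably attractive.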
no code implementations • 2 Feb 2024 • Ruigang Wang, Krishnamurthy Dvijotham, Ian R. Manchester
This paper presents a new \emph{bi-Lipschitz} invertible neural network, the BiLipNet, which has the ability to control both its \emph{Lipschitzness} (output sensitivity to input perturbations) and \emph{inverse Lipschitzness} (input distinguishability from different outputs).
1 code implementation • 22 Jun 2023 • Nicholas H. Barbara, Max Revay, Ruigang Wang, Jing Cheng, Ian R. Manchester
Neural networks are typically sensitive to small input perturbations, leading to unexpected or brittle behaviour.
1 code implementation • 12 Apr 2023 • Nicholas H. Barbara, Ruigang Wang, Ian R. Manchester
This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
no code implementations • 4 Apr 2023 • Chris Verhoek, Ruigang Wang, Roland Tóth
This paper presents two direct parameterizations of stable and robust linear parameter-varying state-space (LPV-SS) models.
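The LPV-SS parameterizations in the paper are more general, but the meaning of a "direct" (unconstrained) parameterization of stable models can be shown on an LTI toy: map any square matrix of free parameters to a matrix with spectral norm below one, so every parameter value yields a Schur-stable model and no stability constraint needs to be enforced during optimization. The mapping below is an illustrative assumption, not the one used in the paper:

```python
import numpy as np

def stable_A(X):
    # Direct parameterization sketch: for any unconstrained X,
    # ||A||_2 = ||X||_2 / (1 + ||X||_2) < 1, so A is Schur stable.
    return X / (1.0 + np.linalg.norm(X, 2))

rng = np.random.default_rng(2)
for _ in range(20):
    X = 10.0 * rng.standard_normal((5, 5))
    A = stable_A(X)
    # spectral radius <= spectral norm < 1  =>  Schur stability
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0
```

Because stability holds for every point of the parameter space, standard unconstrained gradient methods can be used for model fitting.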
1 code implementation • 20 Mar 2023 • Patricia Pauli, Ruigang Wang, Ian R. Manchester, Frank Allgöwer
We establish a layer-wise parameterization for 1D convolutional neural networks (CNNs) with built-in end-to-end robustness guarantees.
2 code implementations • 27 Jan 2023 • Ruigang Wang, Ian R. Manchester
This paper introduces a new parameterization of deep neural networks (both fully-connected and convolutional) with guaranteed $\ell^2$ Lipschitz bounds, i.e., limited sensitivity to input perturbations.
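To see what an $\ell^2$ Lipschitz bound on a network means, here is a minimal sketch using spectral normalization, a simpler and better-known device than the direct parameterization the paper develops: rescaling each weight matrix so its largest singular value is at most one makes each linear map 1-Lipschitz, and composing with 1-Lipschitz activations keeps the bound for the whole network.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, gamma=1.0):
    # rescale W so its spectral norm (largest singular value) is <= gamma
    s = np.linalg.norm(W, 2)
    return W * min(1.0, gamma / s)

W1 = spectral_normalize(rng.standard_normal((16, 4)))
W2 = spectral_normalize(rng.standard_normal((1, 16)))

def net(x):
    # ReLU is 1-Lipschitz, so the composition of 1-Lipschitz maps
    # is itself 1-Lipschitz in the l2 norm
    return W2 @ np.maximum(W1 @ x, 0.0)

# empirical check: output differences never exceed input differences
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert np.linalg.norm(net(x) - net(y)) <= np.linalg.norm(x - y) + 1e-9
```

Spectral normalization is only illustrative here; the point of the paper's direct parameterization is to make the bound hold by construction for every parameter value, without projections or normalization steps during training.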
no code implementations • 8 Dec 2021 • Ruigang Wang, Nicholas H. Barbara, Max Revay, Ian R. Manchester
This paper proposes a nonlinear policy architecture for control of partially-observed linear dynamical systems providing built-in closed-loop stability guarantees.
no code implementations • 2 Dec 2021 • Ruigang Wang, Ian R. Manchester
This paper presents a parameterization of nonlinear controllers for uncertain systems building on a recently developed neural network architecture, called the recurrent equilibrium network (REN), and a nonlinear version of the Youla parameterization.
no code implementations • 1 Oct 2021 • Ian R. Manchester, Max Revay, Ruigang Wang
This tutorial paper provides an introduction to recently developed tools for machine learning, especially learning dynamical systems (system identification), with stability and robustness constraints.
1 code implementation • 13 Apr 2021 • Max Revay, Ruigang Wang, Ian R. Manchester
RENs are otherwise very flexible: they can represent all stable linear systems, all previously-known sets of contracting recurrent neural networks and echo state networks, all deep feedforward neural networks, and all stable Wiener/Hammerstein models, and can approximate all fading-memory and contracting nonlinear systems.
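The contraction property that RENs guarantee can be illustrated on a much simpler recurrent model (this sketch is not the REN parameterization itself): if the state update is $x_{t+1} = \tanh(A x_t + B u_t)$ with $\|A\|_2 < 1$, then, because $\tanh$ is 1-Lipschitz, any two trajectories driven by the same input converge to each other regardless of their initial states.

```python
import numpy as np

rng = np.random.default_rng(1)

# Enforce ||A||_2 = 0.9 < 1, which makes the model contracting
A = rng.standard_normal((8, 8))
A *= 0.9 / np.linalg.norm(A, 2)
B = rng.standard_normal((8, 2))

def step(x, u):
    return np.tanh(A @ x + B @ u)

# two trajectories, same inputs, different initial states
x, z = rng.standard_normal(8), rng.standard_normal(8)
d0 = np.linalg.norm(x - z)
gaps = []
for t in range(60):
    u = rng.standard_normal(2)
    x, z = step(x, u), step(z, u)
    gaps.append(np.linalg.norm(x - z))

# the gap shrinks by at least a factor 0.9 per step
assert gaps[-1] <= 0.9**60 * d0 * 1.01
```

This "forgetting" of initial conditions is the echo-state property; the REN construction generalizes it to a rich, directly parameterized model class.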
no code implementations • 11 Apr 2021 • Ruigang Wang, Patrick J. W. Koelewijn, Ian R. Manchester, Roland Tóth
In this paper, we present a virtual control contraction metric (VCCM) based nonlinear parameter-varying (NPV) approach to design a state-feedback controller for a control moment gyroscope (CMG) to track a user-defined trajectory set.
no code implementations • 1 Jan 2021 • Max Revay, Ruigang Wang, Ian Manchester
In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
no code implementations • 5 Oct 2020 • Max Revay, Ruigang Wang, Ian R. Manchester
In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
no code implementations • 11 Apr 2020 • Max Revay, Ruigang Wang, Ian R. Manchester
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps.
no code implementations • 18 Mar 2020 • Ruigang Wang, Roland Tóth, Patrick J. W. Koelewijn, Ian R. Manchester
This paper presents a systematic approach to nonlinear state-feedback control design that has three main advantages: (i) it ensures exponential stability and $ \mathcal{L}_2 $-gain performance with respect to a user-defined set of reference trajectories; (ii) it provides constructive conditions based on convex optimization and a path-integral-based control realization; and (iii) it is less restrictive than previous similar approaches.