no code implementations • 26 Sep 2023 • Francesca Biagini, Lukas Gonon, Niklas Walter
We derive quantitative error bounds for deep neural networks (DNNs) approximating option prices on a $d$-dimensional risky asset as functions of the underlying model parameters, payoff parameters and initial conditions.
no code implementations • 24 Jul 2023 • Lukas Gonon, Antoine Jacquier
Universal approximation theorems are the foundations of classical neural networks, providing theoretical guarantees that the latter are able to approximate maps of interest.
no code implementations • 2 Apr 2023 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
The reservoir architectures used for the approximation and estimation of elements in the new class are randomly generated echo state networks with either linear or ReLU activation functions.
no code implementations • 19 Jan 2023 • Lukas Gonon, Robin Graeber, Arnulf Jentzen
In particular, a key contribution of this work is to reveal that for all $a, b\in\mathbb{R}$ with $b-a\geq 7$, the functions $[a, b]^d\ni x=(x_1,\dots, x_d)\mapsto\prod_{i=1}^d x_i\in\mathbb{R}$ for $d\in\mathbb{N}$, as well as the functions $[a, b]^d\ni x=(x_1,\dots, x_d)\mapsto\sin(\prod_{i=1}^d x_i)\in\mathbb{R}$ for $d\in\mathbb{N}$, cannot be approximated without the curse of dimensionality by shallow ANNs or by insufficiently deep ANNs with ReLU activation, but can be approximated without the curse of dimensionality by sufficiently deep ANNs with ReLU activation.
1 code implementation • 30 Dec 2022 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
A universal kernel is constructed whose sections approximate any causal and time-invariant filter in the fading memory category with inputs and outputs in a finite-dimensional Euclidean space.
no code implementations • 19 Oct 2022 • Lukas Gonon
This article studies deep neural network expression rates for optimal stopping problems of discrete-time Markov processes on high-dimensional state spaces.
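Regression-based backward induction is the standard numerical scheme behind such optimal stopping problems. As a hypothetical illustration (not the paper's method), the sketch below values a Bermudan put under Black-Scholes in the Longstaff-Schwartz / Tsitsiklis-Van Roy style, with a cubic polynomial basis standing in for the neural network regressor; all model parameters are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bermudan put under Black-Scholes, valued by regression-based backward induction.
# A polynomial basis stands in for the neural network regressor of the paper.
S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths (first column is time dt).
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
S = np.exp(logS)

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: regress the continuation value on in-the-money paths.
V = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    V *= disc
    itm = payoff(S[:, t]) > 0
    if itm.any():
        X = np.vander(S[:, t][itm], 4)  # cubic polynomial features
        beta, *_ = np.linalg.lstsq(X, V[itm], rcond=None)
        cont = X @ beta  # estimated continuation value
        exercise = payoff(S[:, t][itm]) > cont
        V[itm] = np.where(exercise, payoff(S[:, t][itm]), V[itm])

price = disc * np.mean(V)
```

Note that this variant iterates the value function itself rather than realized cashflows, which is simpler but introduces a slight regression bias; the European put under these parameters is worth about 5.57, so the Bermudan price should land somewhat above that.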
no code implementations • 4 Oct 2022 • Francesca Biagini, Lukas Gonon, Andrea Mazzon, Thilo Meyer-Brandis
In this paper we employ deep learning techniques to detect financial asset bubbles by using observed call option prices.
no code implementations • NeurIPS Workshop DLDE 2021 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann
We consider the question of whether the time evolution of controlled differential equations on general state spaces can be approximated arbitrarily well by (regularized) regressions on features that are themselves generated by randomly chosen dynamical systems of moderately high dimension.
no code implementations • 29 Jul 2021 • Francesca Biagini, Lukas Gonon, Thomas Reitsam
First we prove that the $\alpha$-quantile hedging price converges to the superhedging price at time $0$ for $\alpha$ tending to $1$, and show that the $\alpha$-quantile hedging price can be approximated by a neural network-based price.
no code implementations • 14 Jun 2021 • Lukas Gonon
We derive bounds for the prediction error of random neural networks for learning sufficiently non-degenerate Black-Scholes type models.
no code implementations • 23 Feb 2021 • Lukas Gonon, Christoph Schwab
Deep neural networks (DNNs) with ReLU activation function are proved to be able to express viscosity solutions of linear partial integro-differential equations (PIDEs) on state spaces of possibly high dimension $d$.
Numerical Analysis Probability
no code implementations • 22 Oct 2020 • Lukas Gonon, Juan-Pablo Ortega
Echo state networks (ESNs) have recently been proved to be universal approximants for input/output systems with respect to various $L^p$-type criteria.
no code implementations • 17 Sep 2020 • Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann
A new explanation of the geometric nature of the reservoir computing phenomenon is presented.
no code implementations • 3 Jul 2020 • Aritz Bercher, Lukas Gonon, Arnulf Jentzen, Diyora Salimova
In applications one is often not only interested in the size of the error with respect to the objective function but also in the size of the error with respect to a test function which is possibly different from the objective function.
no code implementations • 22 Apr 2020 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
The notion of memory capacity, originally introduced for echo state and linear networks with independent inputs, is generalized to nonlinear recurrent networks with stationary but dependent inputs.
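For the classical case this generalizes, the memory capacity of a linear recurrent network sums, over delays $k$, the squared correlation between the $k$-delayed input and the best linear readout of the state; for a generic linear network with i.i.d. inputs it is bounded by (and typically close to) the state dimension. The sketch below estimates it empirically; the network size, input law, and delay cutoff are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear recurrent network x_{t+1} = A x_t + C z_t driven by i.i.d. Gaussian inputs.
n_res = 20
A = rng.normal(size=(n_res, n_res))
A *= 0.8 / np.max(np.abs(np.linalg.eigvals(A)))  # contractive scaling
C = rng.normal(size=(n_res, 1))

T = 20000
z = rng.normal(size=T)
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = A @ x + C[:, 0] * z[t]
    X[t] = x

def capacity(k):
    """Squared correlation of the best linear readout with the k-delayed input."""
    Xt, target = X[k:], z[:T - k]
    W, *_ = np.linalg.lstsq(Xt, target, rcond=None)
    return np.corrcoef(Xt @ W, target)[0, 1] ** 2

# Truncate the delay sum: contributions decay geometrically with the delay.
MC = sum(capacity(k) for k in range(1, 3 * n_res))
```

For independent inputs the estimate should come out near the theoretical ceiling `n_res`; the paper's point is precisely that this picture must be revisited once the inputs are dependent.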
no code implementations • 14 Feb 2020 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.
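In the feedforward case, this random-weight setup reduces supervised learning to a ridge regression over random features. A minimal self-contained sketch (the target function, widths, and regularization level are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-hidden-layer network with randomly generated internal weights:
# only the linear readout is trained, here by ridge regression.
n_hidden, d = 500, 2
W_in = rng.normal(size=(n_hidden, d))
b = rng.uniform(-1, 1, size=n_hidden)

def features(X):
    return np.maximum(X @ W_in.T + b, 0.0)  # ReLU random features

# Toy target on [-1, 1]^2 (an assumption for this sketch).
f = lambda X: np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

X_train = rng.uniform(-1, 1, size=(2000, d))
y_train = f(X_train)

Phi = features(X_train)
lam = 1e-6  # small ridge penalty for numerical stability
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ y_train)

X_test = rng.uniform(-1, 1, size=(1000, d))
err = np.sqrt(np.mean((features(X_test) @ beta - f(X_test)) ** 2))
```

Because `W_in` and `b` are fixed after sampling, the whole training step is a single linear solve, which is what makes approximation bounds for such networks tractable.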
no code implementations • 20 Nov 2019 • Lukas Gonon, Philipp Grohs, Arnulf Jentzen, David Kofler, David Šiška
These mathematical results from the scientific literature prove in part that algorithms based on ANNs are capable of overcoming the curse of dimensionality in the numerical approximation of high-dimensional PDEs.
no code implementations • 30 Oct 2019 • Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
We analyze the practices of reservoir computing in the framework of statistical learning theory.
no code implementations • 7 Jul 2018 • Lukas Gonon, Juan-Pablo Ortega
Universal approximation properties with respect to $L^p$-type criteria are established for three important families of reservoir computers with stochastic discrete-time semi-infinite inputs.
3 code implementations • 8 Feb 2018 • Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood
We present a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, market impact, liquidity constraints or risk limits, using modern deep reinforcement learning methods.
Computational Finance Numerical Analysis Optimization and Control Probability Risk Management 91G60, 65K99
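The objective such a framework trains against can be illustrated without the neural network itself: simulate paths, run a hedging strategy with proportional transaction costs, and evaluate a risk measure (here expected shortfall) of the terminal hedging error. In this sketch the Black-Scholes delta stands in for the learned policy, and all market parameters and the cost level are assumptions for the example.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

# Hedging a short call with proportional transaction costs; the Black-Scholes
# delta is a stand-in for the neural network policy of the deep hedging framework.
S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 0.25
n_steps, n_paths, cost = 30, 10000, 0.001
dt = T / n_steps

Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))  # standard normal cdf

def bs_delta(S, tau):
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return Phi(d1)

def bs_call(S, tau):
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return S * Phi(d1) - K * Phi(d1 - sigma * np.sqrt(tau))

# Simulate martingale price paths (r = 0).
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum(-0.5 * sigma**2 * dt + sigma * dW, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

pnl = np.zeros(n_paths)
pos = np.zeros(n_paths)
for t in range(n_steps):
    new_pos = bs_delta(S[:, t], T - t * dt)
    pnl -= np.abs(new_pos - pos) * S[:, t] * cost  # proportional transaction cost
    pnl += new_pos * (S[:, t + 1] - S[:, t])       # gain from holding the asset
    pos = new_pos
pnl -= np.maximum(S[:, -1] - K, 0.0)  # short call payoff at maturity

hedging_error = bs_call(S0, T) + pnl  # premium received plus trading P&L
losses = -hedging_error
alpha = 0.95
cvar = losses[losses >= np.quantile(losses, alpha)].mean()  # expected shortfall
```

A learned policy would replace `bs_delta` and be trained to minimize `cvar` (or another convex risk measure) directly, which is what lets the approach absorb frictions that break the classical delta-hedging argument.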