1 code implementation • 22 Mar 2024 • Florian Krach, Josef Teichmann, Hanna Wutte
Lastly, we find that our generative approach for learning optimal (non-)robust investments under trading costs yields universally applicable alternatives to well-known asymptotic strategies derived in idealized settings.
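The abstract does not detail the learning procedure, so the following is a minimal, purely illustrative sketch: a PyTorch policy network trained to maximize expected log-utility of terminal wealth under proportional transaction costs on simulated Black-Scholes paths. The network, cost structure, dynamics, and all parameters are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch: learning a trading strategy under proportional costs.
# Model, cost structure and dynamics are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_paths, n_steps, dt = 512, 50, 1.0 / 50
mu, sigma, cost_rate = 0.05, 0.2, 1e-3  # assumed GBM drift/vol and cost rate

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(n_paths, n_steps)
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * dt**0.5 * z
    price = torch.exp(torch.cumsum(log_ret, dim=1))  # S_0 = 1
    wealth = torch.ones(n_paths)
    pos = torch.zeros(n_paths)  # current fraction of wealth in the asset
    for t in range(n_steps - 1):
        state = torch.stack([price[:, t], pos], dim=1)
        new_pos = torch.tanh(policy(state)).squeeze(1)  # position in [-1, 1]
        trade_cost = cost_rate * (new_pos - pos).abs()  # proportional cost
        ret = price[:, t + 1] / price[:, t] - 1.0
        wealth = wealth * (1.0 + new_pos * ret - trade_cost)
        pos = new_pos
    loss = -torch.log(wealth.clamp_min(1e-6)).mean()  # maximize expected log-utility
    opt.zero_grad()
    loss.backward()
    opt.step()
```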
1 code implementation • 8 Sep 2023 • Xuwei Yang, Anastasis Kratsios, Florian Krach, Matheus Grasselli, Aurelien Lucchi
We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets ${\cal D}_1,\dots,{\cal D}_N$ for the same learning model $f_{\theta}$.
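As a hedged illustration of such an iterative scheme (assuming FedAvg-style size-proportional aggregation and local gradient steps on a shared linear model, which need not coincide with the paper's optimal scheme):

```python
# Hypothetical sketch: each dataset owner takes local gradient steps on a shared
# model f_theta, and a central planner averages the resulting parameters with
# weights proportional to dataset size. Weighting and update rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(n, 3)), rng.normal(size=n)) for n in (50, 80, 120)]
theta = np.zeros(3)  # shared linear model f_theta(x) = x @ theta
weights = np.array([len(y) for _, y in datasets], dtype=float)
weights /= weights.sum()

for _ in range(100):
    local_thetas = []
    for X, y in datasets:
        th = theta.copy()
        for _ in range(5):  # a few local gradient steps on the squared loss
            grad = 2.0 * X.T @ (X @ th - y) / len(y)
            th -= 0.05 * grad
        local_thetas.append(th)
    theta = np.sum([w * th for w, th in zip(weights, local_thetas)], axis=0)
```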
1 code implementation • 24 Jul 2023 • William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann
The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations.
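One way incomplete observations can enter such a model, sketched here purely as an assumption (unobserved coordinates are masked out and the mask itself is fed to the jump network; names and sizes are illustrative, not the paper's architecture):

```python
# Hypothetical sketch of a masked jump update for incomplete observations:
# only observed coordinates (mask == 1) contribute; the mask tells the network
# which entries are real. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

d_obs, d_hidden = 3, 16
jump_net = nn.Sequential(nn.Linear(2 * d_obs + d_hidden, 64), nn.Tanh(),
                         nn.Linear(64, d_hidden))

def jump_update(h, x_obs, mask):
    """Update the latent state h at an observation time.

    x_obs: raw observation, arbitrary values at unobserved coordinates.
    mask:  1.0 where a coordinate was observed, 0.0 otherwise.
    """
    x_masked = x_obs * mask                       # hide unobserved coordinates
    inp = torch.cat([x_masked, mask, h], dim=-1)  # mask marks what is real
    return jump_net(inp)

h = torch.zeros(1, d_hidden)
x = torch.randn(1, d_obs)
m = torch.tensor([[1.0, 0.0, 1.0]])  # second coordinate unobserved
h = jump_update(h, x, m)
```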
1 code implementation • 28 Jun 2022 • Florian Krach, Marc Nübel, Josef Teichmann
This paper studies the problem of forecasting general stochastic processes using a path-dependent extension of the Neural Jump ODE (NJ-ODE) framework (Herrera et al., 2021).
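The training objective in this framework penalizes the model value both at an observation and at its left limit, which forces the model to genuinely predict rather than merely interpolate. A sketch of a loss of this form (the exact objective in the paper may differ in details):

```python
# Hedged sketch of an NJ-ODE-style objective: at each observation time, the
# post-jump output y_post should match the observation, and the left limit
# y_pre should match it too, so the model must predict the next observation.
import torch

def njode_style_loss(x_obs, y_pre, y_post):
    """x_obs, y_pre, y_post: tensors of shape (batch, n_obs, d)."""
    err = (x_obs - y_post).norm(dim=-1) + (y_post - y_pre).norm(dim=-1)
    return (err ** 2).mean()
```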
2 code implementations • 28 Apr 2021 • Calypso Herrera, Florian Krach, Pierre Ruyssen, Josef Teichmann
This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems.
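A minimal sketch of the idea on a hypothetical American-put example under assumed GBM dynamics: run a Longstaff-Schwartz backward induction, but fit the continuation value with a randomized neural network, i.e. a frozen random hidden layer plus a least-squares linear readout. Payoff, dynamics, and sizes are illustrative assumptions.

```python
# Hedged sketch: Longstaff-Schwartz with a randomized neural network. Only the
# linear readout is fit (by least squares); the hidden weights stay random.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, K, r, sigma, dt = 10_000, 50, 1.0, 0.05, 0.2, 1.0 / 50
z = rng.normal(size=(n_paths, n_steps))
S = np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

W = rng.normal(size=(1, 64))  # frozen random hidden weights
b = rng.normal(size=64)

def features(s):
    return np.tanh(s[:, None] * W + b)  # random-feature map, shape (n, 64)

payoff = lambda s: np.maximum(K - s, 0.0)  # American put
value = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    value *= np.exp(-r * dt)                # discount one step back
    itm = payoff(S[:, t]) > 0               # regress on in-the-money paths only
    if itm.sum() > 0:
        A = features(S[itm, t])
        beta, *_ = np.linalg.lstsq(A, value[itm], rcond=None)
        continuation = A @ beta
        exercise = payoff(S[itm, t]) > continuation
        value[itm] = np.where(exercise, payoff(S[itm, t]), value[itm])
price = np.exp(-r * dt) * value.mean()      # first grid point sits at time dt
print("price estimate:", price)
```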
2 code implementations • ICLR 2021 • Calypso Herrera, Florian Krach, Josef Teichmann
We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process.
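A hedged sketch of the mechanism (Euler integration, network sizes, and the interface are assumptions): a latent state follows a neural ODE between observation times, jumps at each observation, and a readout yields the conditional-expectation estimate.

```python
# Hypothetical sketch of an NJ-ODE-style model: ODE flow between observations,
# jump update at observations, linear readout to the prediction.
import torch
import torch.nn as nn

d_x, d_h = 1, 16
drift = nn.Sequential(nn.Linear(d_h + 1, 32), nn.Tanh(), nn.Linear(32, d_h))
jump = nn.Sequential(nn.Linear(d_h + d_x, 32), nn.Tanh(), nn.Linear(32, d_h))
readout = nn.Linear(d_h, d_x)

def predict(obs_times, obs_values, t_grid):
    """Return the model's estimate of E[X_t | observations] on t_grid."""
    h, out, k = torch.zeros(d_h), [], 0
    t_prev = 0.0
    for t in t_grid:
        # one Euler step of the latent ODE over the elapsed time
        h = h + (t - t_prev) * drift(torch.cat([h, torch.tensor([t - t_prev])]))
        t_prev = t
        if k < len(obs_times) and abs(t - obs_times[k]) < 1e-9:  # new observation
            h = jump(torch.cat([h, obs_values[k]]))
            k += 1
        out.append(readout(h))
    return torch.stack(out)

y = predict([0.2, 0.5], [torch.tensor([0.3]), torch.tensor([-0.1])],
            [i / 10 for i in range(1, 11)])
```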
1 code implementation • 28 Apr 2020 • Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen, Josef Teichmann
The robust PCA of covariance matrices plays an essential role in isolating key explanatory features.
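For orientation, here is a sketch of the classical principal-component-pursuit baseline for robust PCA (low-rank plus sparse splitting via alternating thresholding). This is the textbook algorithm, given only as context, not this paper's learned approach.

```python
# Classical robust PCA baseline (inexact ALM for principal component pursuit):
# split M into low-rank L plus sparse S by alternating singular-value and
# soft thresholding. Step-size heuristic follows common practice.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_pca(M, n_iter=200):
    mu = M.size / (4.0 * np.abs(M).sum())      # common step-size heuristic
    lam = 1.0 / np.sqrt(max(M.shape))
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt     # singular-value thresholding
        S = soft(M - L + Y / mu, lam / mu)     # sparse part
        Y = Y + mu * (M - L - S)               # dual update
    return L, S

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
M = A @ A.T                                    # rank-3 covariance-like matrix
M[3, 7] += 5.0; M[7, 3] += 5.0                 # sparse corruption
L, S = robust_pca(M)
```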
no code implementations • 27 Apr 2020 • Calypso Herrera, Florian Krach, Josef Teichmann
The Lipschitz constant is an important quantity that arises in analysing the convergence of gradient-based optimization methods.
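A hedged sketch of the classical upper bound for feedforward ReLU networks, the product of the layers' spectral norms. This standard (often loose) bound is given only as context and is not claimed to be the estimator studied in the paper.

```python
# Classical Lipschitz upper bound for a ReLU network: multiply the spectral
# norms of the linear layers (ReLU itself is 1-Lipschitz).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

bound = 1.0
for layer in net:
    if isinstance(layer, nn.Linear):
        # spectral norm = largest singular value of the weight matrix
        bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
print("Lipschitz upper bound:", bound)
```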