1 code implementation • 22 Mar 2024 • Florian Krach, Josef Teichmann, Hanna Wutte
Lastly, we find that our generative approach for learning optimal (non-)robust investments under trading costs yields universally applicable alternatives to well-known asymptotic strategies derived in idealized settings.
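A minimal sketch of the general idea of learning a trading strategy under proportional trading costs by gradient descent on simulated paths; this is an illustrative toy (the log-utility objective, network architecture, and all parameters are assumptions), not the paper's method.

```python
# Illustrative sketch: learn a trading policy on simulated price paths
# with proportional trading costs by maximizing expected log-utility.
# All names and parameters are hypothetical, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_paths, n_steps, dt, sigma, cost = 1024, 30, 1.0 / 30, 0.2, 1e-3

# simulate Black-Scholes-type price paths (zero drift)
dW = torch.randn(n_paths, n_steps) * dt ** 0.5
S = torch.exp(torch.cumsum(sigma * dW - 0.5 * sigma ** 2 * dt, dim=1))

# small network mapping (price, current position) -> new position in [-1, 1]
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(200):
    wealth = torch.ones(n_paths)
    pos = torch.zeros(n_paths)
    for t in range(n_steps - 1):
        state = torch.stack([S[:, t], pos], dim=1)
        new_pos = policy(state).squeeze(1)
        wealth = wealth - cost * (new_pos - pos).abs() * S[:, t]   # trading costs
        wealth = wealth + new_pos * (S[:, t + 1] - S[:, t])        # trading P&L
        pos = new_pos
    loss = -torch.log(wealth.clamp_min(1e-6)).mean()  # maximize expected log-utility
    opt.zero_grad()
    loss.backward()
    opt.step()
```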
1 code implementation • 27 Jul 2023 • Josef Teichmann, Hanna Wutte
These approaches prove successful for pricing the passport option in one-dimensional and multi-dimensional uncorrelated Black-Scholes (BS) markets.
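For context, a minimal Monte Carlo sketch of valuing a passport-style payoff max(X_T, 0) in a one-dimensional BS market under a fixed example trading strategy; the strategy, parameters, and discretization are illustrative assumptions, not the approaches studied in the paper, whose difficulty lies in finding the optimal strategy.

```python
# Illustrative sketch: Monte Carlo value of max(X_T, 0), where X is the
# trading account generated by an example strategy q with |q| <= 1 in a
# one-dimensional Black-Scholes market. Parameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T, r, sigma, S0 = 100_000, 100, 1.0, 0.0, 0.2, 1.0
dt = T / n_steps

S = np.full(n_paths, S0)       # asset price paths
X = np.zeros(n_paths)          # trading account of the option holder
for _ in range(n_steps):
    q = np.where(X <= 0.0, 1.0, -1.0)   # example strategy with |q| <= 1
    dS = S * (r * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
    X = X + q * dS
    S = S + dS
price = np.exp(-r * T) * np.maximum(X, 0.0).mean()
print(f"MC estimate of the passport-style payoff value: {price:.4f}")
```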
no code implementations • 20 Mar 2023 • Jakob Heiss, Josef Teichmann, Hanna Wutte
Randomized neural networks (randomized NNs), in which only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational time needed to train a neural network.
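A minimal sketch of this idea, assuming a random-feature-style setup with a closed-form ridge fit of the terminal layer; the data, layer sizes, and regularization are illustrative, not the paper's.

```python
# Illustrative sketch of a randomized NN: the hidden layer is sampled once
# and frozen; only the terminal linear layer is fit (closed-form ridge),
# which is what makes training cheap. Data and sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_features(x, W, b):
    # frozen random hidden layer with ReLU activation
    return np.maximum(x @ W + b, 0.0)

# toy 1-d regression data
x = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * x).ravel()

n_hidden, ridge = 200, 1e-3
W = rng.normal(size=(1, n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)        # random biases (never trained)

Phi = random_features(x, W, b)
# ridge regression for the terminal layer's weights only
beta = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_hidden), Phi.T @ y)
y_hat = Phi @ beta                   # fitted values
```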
1 code implementation • 31 Dec 2021 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In practice, multi-task learning (through learning features shared across tasks) is an essential property of deep neural networks (NNs).
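A minimal sketch of hard parameter sharing, one common way such shared features arise; the architecture and dimensions are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch of multi-task learning via hard parameter sharing:
# one trunk learns features shared by all tasks, and each task gets its
# own small head on top of those features.
import torch
import torch.nn as nn

class SharedTrunkNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_tasks):
        super().__init__()
        # shared feature extractor (the learned shared features)
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # one task-specific linear head per task
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(n_tasks)]
        )

    def forward(self, x):
        z = self.trunk(x)                         # shared representation
        return [head(z) for head in self.heads]   # one output per task

model = SharedTrunkNet(in_dim=4, hidden_dim=32, n_tasks=3)
outputs = model(torch.randn(8, 4))  # list of 3 tensors, each of shape (8, 1)
```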
1 code implementation • 26 Feb 2021 • Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann
To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data.
no code implementations • 3 Feb 2021 • Nicolas Curin, Michael Kettler, Xi Kleisinger-Yu, Vlatka Komaric, Thomas Krabichler, Josef Teichmann, Hanna Wutte
To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon.
1 code implementation • 7 Nov 2019 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In this paper, we consider one-dimensional (shallow) ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained.
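A minimal sketch of this model class, complementing the ridge-regression sketch above by training the terminal layer with gradient descent in the one-dimensional case; the seed, sizes, target function, and learning rate are illustrative assumptions, not the paper's experiments.

```python
# Illustrative sketch: 1-d shallow ReLU network with random, frozen hidden
# weights; only the terminal linear layer is trained, here by gradient descent.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_hidden = 100

# frozen random first layer: each hidden unit is a ReLU kink max(w*x + b, 0)
w = torch.randn(n_hidden)            # fixed random slopes
b = torch.rand(n_hidden) * 2 - 1     # fixed random offsets in [-1, 1]

def hidden(x):                       # x: (n, 1) -> (n, n_hidden)
    return torch.relu(x * w + b)

out = nn.Linear(n_hidden, 1)         # only this terminal layer is trained

# toy 1-d regression data
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = torch.sin(3 * x)

opt = torch.optim.SGD(out.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((out(hidden(x)) - y) ** 2).mean()
    loss.backward()
    opt.step()
```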