1 code implementation • 21 Feb 2024 • Lukas Gruber, Markus Holzleitner, Johannes Lehner, Sepp Hochreiter, Werner Zellinger
Estimating the ratio of two probability densities from finitely many samples is a central task in machine learning and statistics.
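To make the task concrete, here is a minimal sketch of density-ratio estimation with a naive plug-in estimator (this is an illustration of the problem setting, not the method of the paper): fit a histogram to samples from each density and take bin-wise ratios, then compare against the analytic ratio of two Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from two 1-D densities: p = N(0, 1), q = N(0.5, 1).
xp = rng.normal(0.0, 1.0, 20000)
xq = rng.normal(0.5, 1.0, 20000)

# Naive plug-in estimator: estimate each density with a histogram
# and take bin-wise ratios p_hat(x) / q_hat(x).
bins = np.linspace(-3.0, 3.0, 31)
p_hat, _ = np.histogram(xp, bins=bins, density=True)
q_hat, _ = np.histogram(xq, bins=bins, density=True)
ratio_hat = p_hat / np.maximum(q_hat, 1e-12)

# Analytic ratio for comparison:
# p(x)/q(x) = exp(((x - 0.5)**2 - x**2) / 2) for these two Gaussians.
centers = 0.5 * (bins[:-1] + bins[1:])
ratio_true = np.exp(0.5 * ((centers - 0.5) ** 2 - centers ** 2))
```

With 20,000 samples per density, the histogram ratio tracks the analytic ratio closely in the central bins, while the tails illustrate why finite-sample ratio estimation is hard: bins where `q_hat` is tiny make the ratio blow up.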
1 code implementation • 19 Feb 2024 • Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter
This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar.
no code implementations • 17 Feb 2023 • Bernhard Schäfl, Lukas Gruber, Johannes Brandstetter, Sepp Hochreiter
Graph neural networks (GNNs) have evolved into one of the most popular deep learning architectures.
no code implementations • 19 Oct 2022 • Paul-Edouard Sarlin, Mihai Dusmanu, Johannes L. Schönberger, Pablo Speciale, Lukas Gruber, Viktor Larsson, Ondrej Miksik, Marc Pollefeys
To close this gap, we introduce LaMAR, a new benchmark with a comprehensive capture and GT pipeline that co-registers realistic trajectories and sensor streams captured by heterogeneous AR devices in large, unconstrained scenes.
1 code implementation • 1 Jun 2022 • Bernhard Schäfl, Lukas Gruber, Angela Bitto-Nemling, Sepp Hochreiter
In experiments on small-sized tabular datasets with less than 1,000 samples, Hopular surpasses Gradient Boosting, Random Forests, SVMs, and in particular several Deep Learning methods.
Ranked #1 on General Classification on Shrutime
no code implementations • 2 Dec 2020 • Markus Holzleitner, Lukas Gruber, José Arjona-Medina, Johannes Brandstetter, Sepp Hochreiter
We prove, under commonly used assumptions, the convergence of actor-critic reinforcement learning algorithms, which simultaneously learn a policy function (the actor) and a value function (the critic).
2 code implementations • ICLR 2021 • Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas Adler, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, Victor Greiff, David Kreil, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter
The new update rule is equivalent to the attention mechanism used in transformers.
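A minimal numpy sketch of this equivalence, under simplifying assumptions (identity query/key/value projections, a single query pattern; the matrix sizes are arbitrary for illustration): the modern Hopfield update ξ_new = X · softmax(β Xᵀ ξ) has the same form as attention softmax(QKᵀ β)V with Q built from the state pattern and K = V built from the stored patterns.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
d, n = 4, 6                       # pattern dimension, number of stored patterns
X = rng.normal(size=(d, n))       # stored patterns as columns
xi = rng.normal(size=(d, 1))      # state (query) pattern
beta = 1.0 / np.sqrt(d)           # inverse temperature, chosen like 1/sqrt(d_k)

# Modern Hopfield update: xi_new = X softmax(beta * X^T xi)
hopfield = X @ softmax(beta * (X.T @ xi), axis=0)

# Transformer attention with Q = xi^T and K = V = X^T:
attention = softmax(beta * (xi.T @ X), axis=-1) @ X.T

assert np.allclose(hopfield.ravel(), attention.ravel())
```

The two expressions compute the same convex combination of stored patterns, which is the sense in which the new update rule coincides with the attention mechanism; in a transformer the projections are learned rather than identity.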
1 code implementation • NeurIPS 2020 • Michael Widrich, Bernhard Schäfl, Hubert Ramsauer, Milena Pavlović, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter, Geir Kjetil Sandve, Victor Greiff, Sepp Hochreiter, Günter Klambauer
We show that the attention mechanism of transformer architectures is actually the update rule of modern Hopfield networks that can store exponentially many patterns.
no code implementations • 6 May 2015 • Clemens Arth, Raphael Grasset, Lukas Gruber, Tobias Langlotz, Alessandro Mulloni, Daniel Wagner
This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014.