no code implementations • 9 Oct 2023 • Maximilian Seitzer, Sjoerd van Steenkiste, Thomas Kipf, Klaus Greff, Mehdi S. M. Sajjadi
Our Dynamic Scene Transformer (DyST) model leverages recent work in neural scene representation to learn a latent decomposition of monocular real-world videos into scene content, per-view scene dynamics, and camera pose.
1 code implementation • NeurIPS 2023 • Andrii Zadaianchuk, Maximilian Seitzer, Georg Martius
Recently, it was shown that the reconstruction of pre-trained self-supervised features leads to object-centric representations on unconstrained real-world image datasets.
3 code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello
Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.
2 code implementations • ICLR 2022 • Maximilian Seitzer, Arash Tavakoli, Dimitrije Antic, Georg Martius
In this work, we examine this approach and identify potential hazards associated with the use of log-likelihood in conjunction with gradient-based optimizers.
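A minimal sketch of one such hazard: under a heteroscedastic Gaussian log-likelihood, the gradient with respect to the predicted mean is scaled by the inverse predicted variance, so samples assigned high variance receive vanishingly small mean-gradients and can stop being fit. The function names below are illustrative, not taken from the paper's code.

```python
import numpy as np

def gaussian_nll(mu, var, y):
    # Per-sample negative log-likelihood of y under N(mu, var),
    # dropping the constant 0.5 * log(2 * pi) term.
    return 0.5 * np.log(var) + 0.5 * (y - mu) ** 2 / var

def grad_mu(mu, var, y):
    # d NLL / d mu = (mu - y) / var: the residual is re-weighted
    # by the inverse of the predicted variance.
    return (mu - y) / var

# Two samples with the same residual but different predicted variances:
# the high-variance sample contributes a 100x smaller mean-gradient,
# so gradient descent effectively stops correcting its mean estimate.
print(grad_mu(0.0, 0.1, 1.0))   # -10.0
print(grad_mu(0.0, 10.0, 1.0))  # -0.1
```

This inverse-variance weighting is what a gradient-based optimizer actually follows, which is why jointly fitting mean and variance with the raw NLL can behave very differently from fitting the mean alone.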
2 code implementations • NeurIPS 2021 • Maximilian Seitzer, Bernhard Schölkopf, Georg Martius
Many reinforcement learning (RL) environments consist of independent entities that interact sparsely.
1 code implementation • ICLR 2021 • Andrii Zadaianchuk, Maximilian Seitzer, Georg Martius
We show that the structure of these representations, combined with goal-conditioned attention policies, helps the autonomous agent discover and learn useful skills.
1 code implementation • 27 Feb 2020 • Maximilian Seitzer, Andreas Foltyn, Felix P. Kemeth
This report, accompanying our stage 2 submission to the NeurIPS 2019 disentanglement challenge, presents a simple image preprocessing method for learning disentangled latent factors.
1 code implementation • 23 Feb 2020 • Maximilian Seitzer
This report, accompanying our stage 1 submission to the NeurIPS 2019 disentanglement challenge, presents a simple image preprocessing method for training VAEs that improves disentanglement compared to training directly on the raw images.
1 code implementation • 28 Jun 2018 • Maximilian Seitzer, Guang Yang, Jo Schlemper, Ozan Oktay, Tobias Würfl, Vincent Christlein, Tom Wong, Raad Mohiaddin, David Firmin, Jennifer Keegan, Daniel Rueckert, Andreas Maier
In addition, we introduce a semantic interpretability score that measures the visibility of the region of interest in both ground-truth and reconstructed images, allowing us to objectively quantify how useful the reconstructed image quality is for downstream post-processing and analysis.