no code implementations • 20 Dec 2023 • Shubhangi Ghosh, Luigi Gresele, Julius von Kügelgen, Michel Besserve, Bernhard Schölkopf
As typical in ICA, previous work focused on the case with an equal number of latent components and observed mixtures.
no code implementations • 30 Nov 2023 • Armin Kekić, Bernhard Schölkopf, Michel Besserve
Why does a phenomenon occur?
1 code implementation • NeurIPS 2023 • Liang Wendong, Armin Kekić, Julius von Kügelgen, Simon Buchholz, Michel Besserve, Luigi Gresele, Bernhard Schölkopf
As a corollary, this interventional perspective also leads to new identifiability results for nonlinear ICA -- a special case of CauCA with an empty graph -- requiring strictly fewer datasets than previous results.
no code implementations • 15 Sep 2022 • Kaidi Shao, Nikos K. Logothetis, Michel Besserve
Transient phenomena play a key role in coordinating brain activity at multiple scales; however, their underlying mechanisms remain largely unknown.
no code implementations • 12 Aug 2022 • Simon Buchholz, Michel Besserve, Bernhard Schölkopf
Several families of spurious solutions that fit the data perfectly but do not correspond to the ground-truth factors can be constructed in generic settings.
1 code implementation • 25 Jul 2022 • Hamza Keurti, Hsiao-Ru Pan, Michel Besserve, Benjamin F. Grewe, Bernhard Schölkopf
How agents can learn internal models that veridically represent their interactions with the real world is a largely open question.
1 code implementation • 6 Jun 2022 • Patrik Reizinger, Luigi Gresele, Jack Brady, Julius von Kügelgen, Dominik Zietlow, Bernhard Schölkopf, Georg Martius, Wieland Brendel, Michel Besserve
Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood.
no code implementations • 29 Apr 2022 • Kaidi Shao, Nikos K. Logothetis, Michel Besserve
Transient recurring phenomena are ubiquitous in many scientific fields like neuroscience and meteorology.
no code implementations • 14 Feb 2022 • Shubhangi Ghosh, Luigi Gresele, Julius von Kügelgen, Michel Besserve, Bernhard Schölkopf
Model identifiability is a desirable property in the context of unsupervised representation learning.
1 code implementation • 10 Dec 2021 • Michel Besserve, Bernhard Schölkopf
Complex systems often contain feedback loops that can be described as cyclic causal models.
no code implementations • 29 Oct 2021 • Michel Besserve, Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf
A new perspective has been provided based on the principle of Independence of Causal Mechanisms (ICM), leading to the Spectral Independence Criterion (SIC). SIC postulates that the power spectral density (PSD) of the cause time series is uncorrelated with the squared modulus of the frequency response of the filter generating the effect.
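The SIC postulate lends itself to a quick numerical check. The following is a minimal sketch, not the paper's implementation: it generates a white-noise cause, filters it with a randomly drawn FIR "mechanism" (all names and parameters here are illustrative), and measures the sample correlation between the cause's PSD and the filter's squared gain, which SIC predicts should be close to zero in the causal direction.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Toy causal pair: white-noise cause x, effect y obtained by FIR filtering.
# The filter coefficients b play the role of the "mechanism" and are drawn
# independently of x.
x = rng.standard_normal(4096)
b = rng.standard_normal(8)
y = signal.lfilter(b, [1.0], x)

# PSD of the cause on the Welch frequency grid.
freqs, psd_x = signal.welch(x, nperseg=256)

# Squared modulus of the filter's frequency response on the same grid:
# H(f) = sum_k b_k * exp(-2*pi*i*f*k).
H = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(b)))) @ b
gain_sq = np.abs(H) ** 2

# SIC postulates that, in the causal direction, the cause PSD and the
# squared gain are uncorrelated, so their sample correlation is near zero.
rho = np.corrcoef(psd_x, gain_sq)[0, 1]
print(f"correlation between cause PSD and |H|^2: {rho:.3f}")
```

Running the same check in the anti-causal direction (PSD of the effect against the inverse filter's gain) would, under the SIC hypothesis, show a systematic dependence instead.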
1 code implementation • 30 Jun 2021 • Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf
Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods.
1 code implementation • NeurIPS 2021 • Luigi Gresele, Julius von Kügelgen, Vincent Stimper, Bernhard Schölkopf, Michel Besserve
Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process.
1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.
Ranked #1 on Image Classification on Causal3DIdent
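The hand-crafted, semantics-preserving transformations mentioned above can be sketched as follows. This is a hypothetical minimal example, not the paper's pipeline: two random-crop-and-flip views of the same image serve as a positive pair for self-supervised training, on the assumption that cropping and flipping leave the content invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng, crop=24):
    """Hand-crafted, semantics-preserving view: random crop + optional flip."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    view = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:          # horizontal flip with probability 1/2
        view = view[:, ::-1]
    return view

img = rng.random((32, 32, 3))       # stand-in for a natural image
v1 = augment(img, rng)              # two "positive" views of the same content,
v2 = augment(img, rng)              # to be pulled together by a contrastive loss
```

The design choice is that every augmentation must be plausibly content-preserving; transformations that alter the semantics (e.g., cropping out the object entirely) would invalidate the positive pair.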
no code implementations • 3 Dec 2020 • Michel Besserve, Simon Buchholz, Bernhard Schölkopf
Large-scale testing is considered key to assess the state of the current COVID-19 pandemic.
no code implementations • 12 Oct 2020 • Daniel Chicharro, Michel Besserve, Stefano Panzeri
Using these statistics, we formulate additional rules of causal orientation that provide causal information not obtainable from standard structure learning algorithms, which exploit only conditional independencies between observable variables.
no code implementations • 6 Jul 2020 • Ashkan Soleymani, Anant Raj, Stefan Bauer, Bernhard Schölkopf, Michel Besserve
The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.
no code implementations • 14 Jun 2020 • Felix Leeb, Guilia Lanzillotta, Yashas Annadani, Michel Besserve, Stefan Bauer, Bernhard Schölkopf
We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling.
no code implementations • 1 Apr 2020 • Michel Besserve, Rémy Sun, Dominik Janzing, Bernhard Schölkopf
Generative models can be trained to emulate complex empirical data, but are they useful to make predictions in the context of previously unobserved environments?
no code implementations • ICLR 2019 • Michel Besserve, Rémy Sun, Bernhard Schölkopf
Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools to capture and investigate the properties of complex empirical data.
no code implementations • 6 Mar 2019 • Anant Raj, Luigi Gresele, Michel Besserve, Bernhard Schölkopf, Stefan Bauer
The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.
no code implementations • ICLR 2020 • Michel Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.
no code implementations • 16 Mar 2018 • Philipp Geiger, Michel Besserve, Justus Winkelmann, Claudius Proissl, Bernhard Schölkopf
We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.).
no code implementations • ICLR 2018 • Michel Besserve, Dominik Janzing, Bernhard Schölkopf
Generative models are important tools to capture and investigate the properties of complex empirical data.
no code implementations • 5 May 2017 • Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, Dominik Janzing
The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms.
no code implementations • 4 Mar 2015 • Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf, Michel Besserve
Assuming the effect is generated from the cause through a linear system, we propose a new approach based on the hypothesis that nature chooses the "cause" and the "mechanism that generates the effect from the cause" independently of each other.
no code implementations • NeurIPS 2013 • Michel Besserve, Nikos K. Logothetis, Bernhard Schölkopf
This framework enables us to develop an independence test between time series as well as a similarity measure to compare different types of coupling.
no code implementations • NeurIPS 2012 • David Balduzzi, Michel Besserve
This paper suggests a learning-theoretic perspective on how synaptic plasticity benefits global brain functioning.