no code implementations • 15 Apr 2024 • Doron Haviv, Russell Zhang Kunes, Thomas Dougherty, Cassandra Burdziak, Tal Nawy, Anna Gilbert, Dana Pe'er
Along with an encoder that maps distributions to embeddings, Wasserstein Wormhole includes a decoder that maps embeddings back to distributions, so that operations performed in the embedding space, such as Wasserstein barycenter estimation and OT interpolation, generalize back to OT space.
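The core idea — do cheap Euclidean operations on embeddings, then decode — can be sketched as follows. This is a minimal illustration, not the paper's architecture: the real encoder/decoder are trained transformer networks, whereas here linear placeholders stand in so the round trip is runnable.

```python
import numpy as np

# Hypothetical stand-ins for the trained encoder/decoder networks:
# the encoder pools a point cloud into a fixed-size embedding, and the
# decoder expands an embedding back into a point cloud.
rng = np.random.default_rng(0)
D, K = 2, 8                      # point dimension, embedding size
W_enc = rng.normal(size=(D, K))  # placeholder "encoder" weights
W_dec = np.linalg.pinv(W_enc)    # placeholder "decoder" weights

def encode(cloud):
    # Mean-pool an (n, D) point cloud, then project to a K-dim embedding.
    return cloud.mean(axis=0) @ W_enc

def decode(z, n=100):
    # Expand an embedding back into an (n, D) point cloud at its mean.
    return np.tile(z @ W_dec, (n, 1))

A = rng.normal(loc=-2.0, size=(100, D))  # distribution centered at (-2, -2)
B = rng.normal(loc=+2.0, size=(100, D))  # distribution centered at (+2, +2)

# OT-style interpolation done as a straight line in embedding space:
# the midpoint embedding decodes to a cloud between A and B.
z_mid = 0.5 * encode(A) + 0.5 * encode(B)
midpoint_cloud = decode(z_mid)
```

In the same spirit, a Wasserstein barycenter of several distributions is approximated by averaging their embeddings and decoding the result.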
no code implementations • 12 Aug 2022 • Russell Z. Kunes, Mingzhang Yin, Max Land, Doron Haviv, Dana Pe'er, Simon Tavaré
Gradient estimation is often necessary for fitting generative models with discrete latent variables, in contexts such as reinforcement learning and variational autoencoder (VAE) training.
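The difficulty is that sampling a discrete latent blocks backpropagation. The classic workaround, and the baseline such papers improve on, is the score-function (REINFORCE) estimator; a minimal self-contained version for a Bernoulli latent (my illustration, not this paper's estimator):

```python
import numpy as np

# Score-function (REINFORCE) estimator for d/dtheta E_{z~Bern(p)}[f(z)],
# where p = sigmoid(theta). The discrete sample z is not differentiable,
# but the log-probability of drawing it is.
rng = np.random.default_rng(0)

def f(z):
    # Any black-box objective of the discrete sample.
    return (z - 0.3) ** 2

def grad_estimate(theta, n=200_000):
    p = 1.0 / (1.0 + np.exp(-theta))       # sigmoid
    z = (rng.random(n) < p).astype(float)  # Bernoulli samples
    # For a sigmoid-parameterized Bernoulli, d/dtheta log P(z) = z - p.
    return np.mean(f(z) * (z - p))

theta = 0.0
p = 0.5
# Closed form for comparison: E[f] = p f(1) + (1-p) f(0),
# so dE/dtheta = p (1-p) (f(1) - f(0)).
exact = p * (1 - p) * (f(1.0) - f(0.0))
approx = grad_estimate(theta)
```

The estimator is unbiased but high-variance, which is why variance-reduction techniques are an active research topic in VAE and reinforcement-learning settings.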
4 code implementations • 29 Jun 2019 • Gal Raayoni, Shahar Gottlieb, George Pisha, Yoav Harris, Yahel Manor, Uri Mendlovic, Doron Haviv, Yaron Hadad, Ido Kaminer
Fundamental mathematical constants like $e$ and $\pi$ are ubiquitous in diverse fields of science, from abstract mathematics to physics, biology and chemistry.
no code implementations • ICLR 2019 • Doron Haviv, Alexander Rivkind, Omri Barak
To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories.
no code implementations • ICLR 2019 • Adar Elad, Doron Haviv, Yochai Blau, Tomer Michaeli
The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization).
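The tradeoff described here is usually written as the IB Lagrangian (the standard formulation, stated for context rather than quoted from this paper), where $T$ is a layer's representation, $X$ the input, $Y$ the labels, and $\beta$ trades compression against prediction:

```latex
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```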
2 code implementations • 19 Feb 2019 • Doron Haviv, Alexander Rivkind, Omri Barak
Finally, we propose a novel regularization technique that is based on the relation between hidden state speeds and memory longevity.
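One plausible form of such a penalty, discouraging fast hidden-state motion so that states settle near slow points, can be sketched as below. This is a hypothetical illustration of the stated relation; the paper's actual regularizer may be defined differently.

```python
import numpy as np

# Hypothetical "hidden state speed" penalty: sum of squared one-step
# state velocities ||h_t - h_{t-1}||^2 over an RNN trajectory. Slow
# trajectories (long-lived memories) incur a small penalty.
def speed_penalty(hidden_states, weight=1e-3):
    # hidden_states: (T, H) array of hidden states over T time steps.
    diffs = np.diff(hidden_states, axis=0)  # (T-1, H) state velocities
    return weight * np.sum(diffs ** 2)

static_traj = np.zeros((5, 3))                       # no motion at all
moving_traj = np.cumsum(np.ones((5, 3)), axis=0)     # unit speed each step
static_cost = speed_penalty(static_traj)
moving_cost = speed_penalty(moving_traj)
```

Added to the training loss, such a term would bias the network toward the slow, stable dynamics the paper associates with memory longevity.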