no code implementations • 29 Dec 2022 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
This paper proposes using multimodal generative models for semi-supervised learning in instruction-following tasks.
no code implementations • 6 Dec 2021 • Kei Akuzawa, Kotaro Onishi, Keisuke Takiguchi, Kohki Mametani, Koichiro Mori
Variational autoencoder-based voice conversion (VAE-VC) has the advantage of requiring only pairs of speech samples and speaker labels for training.
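The VAE-VC objective mentioned above can be illustrated with a toy sketch. This is a hypothetical, minimal illustration of the standard conditional-VAE loss (reconstruction plus KL regularizer), not the paper's implementation; the function names and toy numbers are assumptions. The encoder is assumed to map a speech frame to a diagonal Gaussian posterior over a speaker-independent latent, and the decoder reconstructs the frame conditioned on that latent and the speaker label, so only (speech, speaker label) pairs are needed.

```python
import math

# Hypothetical toy sketch of a VAE-style objective (illustrative only, not
# the paper's implementation). An encoder maps a speech frame to a diagonal
# Gaussian posterior over a speaker-independent latent z; a decoder
# reconstructs the frame conditioned on (z, speaker label).

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def elbo(frame, recon, mu, logvar):
    """Gaussian reconstruction log-likelihood (unit variance, up to an
    additive constant) minus the KL regularizer; training maximizes this."""
    recon_ll = -0.5 * sum((x - y) ** 2 for x, y in zip(frame, recon))
    return recon_ll - kl_to_standard_normal(mu, logvar)

# Toy numbers standing in for encoder/decoder outputs on one frame.
frame = [0.2, -0.1, 0.4, 0.0]
good_recon = [0.2, -0.1, 0.4, 0.0]    # perfect reconstruction
bad_recon = [1.0, 1.0, 1.0, 1.0]
mu, logvar = [0.0, 0.0], [0.0, 0.0]   # posterior already matches the prior
print(elbo(frame, good_recon, mu, logvar))  # 0.0: no loss at all
print(elbo(frame, bad_recon, mu, logvar))   # negative: penalized reconstruction
```

A worse reconstruction or a posterior far from the prior both lower the ELBO, which is exactly the trade-off a VAE-style converter trains against.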
no code implementations • 14 May 2021 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Therefore, the meta-RL agent faces the challenge of specifying both the hidden task and states based on a small amount of experience.
no code implementations • 1 Jan 2021 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
However, by analyzing sequential VAEs from an information-theoretic perspective, we can claim that simply maximizing the MI encourages the latent variables to carry redundant information and prevents the disentanglement of global and local features.
no code implementations • 25 Sep 2019 • Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
Adversarial invariance induction (AII) is a powerful method for this purpose: it maximizes a proxy of the conditional entropy between representations and attributes via adversarial training between an attribute discriminator and a feature extractor.
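The adversarial game described above can be sketched with toy numbers. This is a hypothetical illustration of the two competing objectives, not the authors' implementation; the function names, the toy probabilities, and the weighting parameter `lam` are all assumptions. The discriminator minimizes cross-entropy when predicting the attribute from features, while the feature extractor adds the *negated* cross-entropy to its task loss, pushing the discriminator toward confusion as a proxy for maximizing the conditional entropy of the attribute given the representation.

```python
import math

# Hypothetical sketch of AII-style objectives (toy numbers, not the paper's
# implementation). The discriminator D predicts the attribute from features;
# the feature extractor E is rewarded for making D wrong.

def cross_entropy(probs, label):
    """Negative log-probability the discriminator assigns to the true attribute."""
    return -math.log(probs[label])

def discriminator_loss(probs, label):
    return cross_entropy(probs, label)  # D wants this small

def extractor_loss(task_loss, probs, label, lam=1.0):
    # E wants D's cross-entropy LARGE: a proxy for high conditional entropy
    # of the attribute given the learned representation.
    return task_loss - lam * cross_entropy(probs, label)

# Toy discriminator outputs over 3 attribute classes, true label 0.
confident = [0.9, 0.05, 0.05]   # D identifies the attribute: bad for invariance
uniform = [1/3, 1/3, 1/3]       # D maximally confused: invariant features
print(extractor_loss(0.5, confident, 0))
print(extractor_loss(0.5, uniform, 0))
```

With equal task loss, the extractor's objective is lower when the discriminator is confused, which is what drives the representation toward attribute invariance.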
no code implementations • 29 Apr 2019 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
However, previous domain-invariance-based methods overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and domain invariance.
no code implementations • ICLR Workshop LLD 2019 • Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
Adversarial feature learning (AFL) is a powerful framework for learning representations invariant to a nuisance attribute; it uses an adversarial game between a feature extractor and a categorical attribute classifier.
no code implementations • ICLR Workshop LLD 2019 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Learning domain-invariant representation is a dominant approach for domain generalization.
no code implementations • 27 Sep 2018 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Learning domain-invariant representation is a dominant approach for domain generalization, where we need to build a classifier that is robust to domain shifts induced by changes in users, acoustic or lighting conditions, etc.
no code implementations • 6 Apr 2018 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS).
no code implementations • ICLR 2018 • Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.