no code implementations • 11 Apr 2024 • Yuwei Sun, Ippei Fujisawa, Arthur Juliani, Jun Sakuma, Ryota Kanai
Neural networks encounter the challenge of Catastrophic Forgetting (CF) in continual learning, where new task learning interferes with previously learned knowledge.
no code implementations • 29 Feb 2024 • Masaru Kuwabara, Ryota Kanai
For individuals affected by conditions such as paralysis, Brain-Computer Interfaces (BCIs) have begun to significantly improve quality of life.
1 code implementation • 22 Sep 2023 • Yuwei Sun, Hideya Ochiai, Zhirong Wu, Stephen Lin, Ryota Kanai
Existing studies such as the Coordination method employ iterative cross-attention mechanisms with a bottleneck to enable the sparse association of inputs.
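The bottlenecked cross-attention idea can be illustrated with a minimal single-head sketch, in which a small set of latent slots queries a much larger set of input tokens, so all information must pass through the narrow slot bottleneck. Dimensions, initialization, and names here are illustrative, not the Coordination method's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_bottleneck(inputs, latents, d_k=32, seed=0):
    """Single-head cross-attention through a narrow set of latent slots:
    queries come from the few latents, keys/values from the many inputs,
    forcing a sparse association of the inputs."""
    rng = np.random.default_rng(seed)
    d = inputs.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = latents @ Wq                        # (n_slots, d_k)
    K = inputs @ Wk                         # (n_tokens, d_k)
    V = inputs @ Wv                         # (n_tokens, d_k)
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_slots, n_tokens)
    return attn @ V                         # (n_slots, d_k)

# 16 input tokens are associated through only 4 latent slots
out = cross_attention_bottleneck(np.ones((16, 64)), np.ones((4, 64)))
```

In an iterative variant, the slot outputs are fed back as the next round's latents, letting the bottleneck refine which inputs it attends to.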
no code implementations • 17 Aug 2023 • Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin VanRullen
From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties.
1 code implementation • 14 Nov 2022 • Ippei Fujisawa, Ryota Kanai
Furthermore, we discuss the relevance of logical tasks to concepts such as extrapolation, explainability, and inductive bias.
no code implementations • 24 Mar 2022 • Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, Ryota Kanai
In popular media, a connection is often drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human- or superhuman-level intelligence.
no code implementations • 26 Feb 2022 • Hiro Taiyo Hamada, Ryota Kanai
While social relationships within groups are a critical factor for wellbeing, the development of wellbeing AI for social interactions remains relatively underexplored.
no code implementations • 14 Jul 2021 • Francesco Massari, Martin Biehl, Lisa Meeden, Ryota Kanai
A possible countermeasure is to endow RL agents with an intrinsic reward function, or 'intrinsic motivation', which rewards the agent based on certain features of the current sensor state.
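One common concrete choice of such an intrinsic reward, shown purely as an illustration rather than the specific function studied in the paper, is a count-based novelty bonus that rewards the agent for visiting rarely seen sensor states:

```python
from collections import Counter
import math

class CountBasedCuriosity:
    """Illustrative intrinsic reward: a novelty bonus that decays
    with how often a sensor state has been visited."""
    def __init__(self, beta=1.0):
        self.counts = Counter()
        self.beta = beta

    def reward(self, state):
        self.counts[state] += 1
        # bonus shrinks as 1/sqrt(visit count): novel states pay more
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBasedCuriosity()
r1 = bonus.reward("s0")   # first visit: full bonus 1.0
r2 = bonus.reward("s0")   # repeat visit: smaller bonus, 1/sqrt(2)
```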
no code implementations • 4 Dec 2020 • Rufin VanRullen, Ryota Kanai
Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.
no code implementations • 5 Oct 2020 • Martin Biehl, Ryota Kanai
On the other hand, we attempt to establish a connection between the information gain, a quantity that arises from interpreting the hyperparameter as a model, and the one-step pointwise NTIC, a quantity that does not depend on this interpretation.
no code implementations • 12 Jan 2020 • Martin Biehl, Felix A. Pollock, Ryota Kanai
Additionally, we highlight that the variational densities presented in newer formulations of the free energy principle and lemma are parameterised by different variables than in older works, leading to a substantially different interpretation of the theory.
no code implementations • 18 Jun 2018 • Ildefons Magrans de Abril, Ryota Kanai
Curiosity reward informs the agent about the relevance of a recent agent action, whereas empowerment is implemented as the opposite information flow, from the agent to the environment, quantifying the agent's potential to control its own future.
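Empowerment is typically formalized as the channel capacity from actions to future sensor states. The sketch below computes the mutual information I(A; S') for a fixed uniform action distribution in a toy deterministic world, which gives a lower bound on empowerment (true empowerment maximizes over action distributions):

```python
import numpy as np

def mutual_information(p_joint):
    """I(A; S') in bits from a joint distribution p(a, s')."""
    pa = p_joint.sum(axis=1, keepdims=True)   # marginal over actions
    ps = p_joint.sum(axis=0, keepdims=True)   # marginal over next states
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (pa @ ps)[nz])))

# deterministic toy world: 2 actions lead to 2 distinct next states,
# actions chosen uniformly
p_as = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
empowerment_lb = mutual_information(p_as)   # 1.0 bit: full control
```

If both actions led to the same next state, the joint would factorize and the information flow from agent to environment would drop to zero.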
1 code implementation • 5 Jun 2018 • Yen Yu, Acer Y. C. Chang, Ryota Kanai
This paper presents the Homeo-Heterostatic Value Gradients (HHVG) algorithm as a formal account of the constructive interplay between boredom and curiosity, which gives rise to effective exploration and superior forward-model learning.
1 code implementation • 1 Jun 2018 • Nicholas Guttenberg, Martin Biehl, Nathaniel Virgo, Ryota Kanai
We investigate the use of attentional neural network layers in order to learn a `behavior characterization' which can be used to drive novelty search and curiosity-based policies.
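A behavior characterization feeds into novelty search through a simple scoring rule; a common sketch (not the paper's attentional architecture itself) scores each new behavior by its mean distance to the k nearest characterizations already stored in an archive:

```python
import numpy as np

def novelty_score(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behavior
    characterizations in the archive; higher means more novel."""
    if len(archive) == 0:
        return float("inf")          # everything is novel at first
    dists = sorted(np.linalg.norm(behavior - b) for b in archive)
    return float(np.mean(dists[:k]))

archive = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([5.0, 5.0])]
near = novelty_score(np.array([0.05, 0.0]), archive, k=2)   # close to archive
far = novelty_score(np.array([10.0, 10.0]), archive, k=2)   # far from archive
```

The attentional layer's role is to produce the `behavior` vector itself; here the characterization is simply assumed to be given.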
1 code implementation • 30 Mar 2018 • Nicholas Guttenberg, Ryota Kanai
We train a network to generate mappings between training sets and classification policies (a 'classifier generator') by conditioning on the entire training set via an attentional mechanism.
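At its simplest, conditioning a classifier on the whole training set via attention looks like the sketch below, where the query's label distribution is an attention-weighted mixture of the training labels. This is a deliberately stripped-down stand-in for the paper's learned classifier generator: a new training set immediately yields a new classifier, with no gradient steps:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentional_classifier(query, train_x, train_y, n_classes, temperature=1.0):
    """Generate the classification 'policy' for a query directly from
    the training set: attend over all training examples, then mix
    their one-hot labels by the attention weights."""
    scores = train_x @ query / temperature   # similarity to each example
    weights = softmax(scores)                # (n_train,)
    onehot = np.eye(n_classes)[train_y]      # (n_train, n_classes)
    return weights @ onehot                  # predicted label distribution

train_x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
train_y = np.array([0, 0, 1, 1])
pred = attentional_classifier(np.array([1.0, 0.0]), train_x, train_y, n_classes=2)
```

In the paper's setting the mapping from training set to classifier is itself learned; here the similarity kernel is fixed for clarity.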
no code implementations • 23 Jan 2018 • Ildefons Magrans de Abril, Ryota Kanai
We propose a curiosity reward based on information theory principles and consistent with the animal instinct to maintain certain critical parameters within a bounded range.
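The homeostatic flavor of that idea can be sketched as a reward that is neutral while a monitored variable stays inside its viable range and grows more negative with the deviation outside it. This is a toy stand-in, not the paper's information-theoretic formulation:

```python
def homeostatic_reward(value, low, high, scale=1.0):
    """Toy homeostatic reward: zero inside the viable range
    [low, high], negative and growing with deviation outside it."""
    if value < low:
        return -scale * (low - value)
    if value > high:
        return -scale * (value - high)
    return 0.0

ok = homeostatic_reward(37.0, 36.5, 37.5)    # inside the range: 0.0
bad = homeostatic_reward(39.0, 36.5, 37.5)   # 1.5 above range: -1.5
```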
no code implementations • 19 Dec 2017 • Jun Kitazono, Ryota Kanai, Masafumi Oizumi
In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\Phi$ by evaluating the accuracy of the algorithm in simulated data and real neural data.
1 code implementation • 15 Aug 2017 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai
Controlling embodied agents with many actuated degrees of freedom is a challenging task.
no code implementations • 28 Feb 2017 • Hiromitsu Mizutani, Ryota Kanai
Here we report two types of compression ratio based on two ways to quantify the description length of data after compression.
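A single compression ratio is easy to compute; the paper's point is that the description length after compression can be quantified in more than one way. The sketch below shows only the simplest variant (compressed size over raw size, with zlib as a stand-in compressor), not the paper's two specific measures:

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed description length divided by raw length;
    lower means the data is more regular (more compressible)."""
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"ab" * 512            # highly regular: compresses well
varied = bytes(range(256)) * 4      # more varied content per block
ratio_structured = compression_ratio(structured)
ratio_varied = compression_ratio(varied)
```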
2 code implementations • 22 Feb 2017 • Nicholas Guttenberg, Yen Yu, Ryota Kanai
In this method, the problem of action selection is reduced to one of gradient descent on the latent space of the generative model, with the model itself providing the means of evaluating outcomes and finding the gradient, much like how the reward network in Deep Q-Networks (DQN) provides gradient information for the action generator.
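The action-selection-by-gradient-descent idea can be sketched with a toy value function over a latent space: ascend the value gradient in latent space, then decode the optimized latent into an action. Finite-difference gradients stand in for backprop through the real generative model, and all names here are illustrative:

```python
import numpy as np

def plan_by_latent_gradient(value, decode, z_init, lr=0.1, steps=100, eps=1e-4):
    """Action selection as gradient ascent on a latent space:
    move z toward higher predicted value, then decode z into an action."""
    z = z_init.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):          # numerical gradient of value wrt z
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (value(z + dz) - value(z - dz)) / (2 * eps)
        z += lr * grad                   # ascend toward higher-value outcomes
    return decode(z)

# toy example: value peaks at z = [2, -1]; decode is the identity map
target = np.array([2.0, -1.0])
action = plan_by_latent_gradient(lambda z: -np.sum((z - target) ** 2),
                                 lambda z: z, np.zeros(2))
```

In the method described above, the generative model plays both roles at once: it evaluates outcomes and supplies the gradient, much as the reward network does in DQN.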
2 code implementations • 14 Dec 2016 • Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, Ryota Kanai
The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain.
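That symmetry can be checked directly: because a convolution applies the same shared kernel at every position, shifting the input shifts the output by the same amount. A minimal 1-D check:

```python
import numpy as np

def conv1d_valid(x, k):
    """Plain 1-D 'valid' sliding-window correlation with a shared kernel."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.zeros(10)
x[2] = 1.0                         # an impulse at position 2
k = np.array([1.0, -1.0, 0.5])     # shared (position-independent) weights
shifted = np.roll(x, 3)            # the same impulse, translated by 3

# shifting the input shifts the output identically (equivariance)
print(np.allclose(np.roll(conv1d_valid(x, k), 3), conv1d_valid(shifted, k)))  # True
```

A fully-connected layer has no such guarantee: each position gets its own weights, so a shifted input generally produces an unrelated output.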
no code implementations • 1 Sep 2016 • Nicholas Guttenberg, Martin Biehl, Ryota Kanai
We present a loss function for neural networks that encompasses an idea of trivial versus non-trivial predictions, such that the network jointly determines its own prediction goals and learns to satisfy them.