Adaptive behavior with stable synapses

10 Apr 2024 · Cristiano Capone, Luca Falorsi, Maurizio Mattia

Behavioral changes in animals and humans, in response to an error or a verbal instruction, can be extremely rapid. In machine learning and reinforcement learning, improvements in behavioral performance are usually attributed to synaptic plasticity and, more generally, to changes in and optimization of network parameters. However, such rapid changes are not consistent with the timescales of synaptic plasticity, suggesting that the responsible mechanism may instead be a dynamical reconfiguration of the network. In recent years, similar capabilities have been observed in transformers, a foundational machine learning architecture widely used in applications such as natural language and image processing. Transformers are capable of in-context learning: the ability to adapt and acquire new information dynamically within the context of the task or environment they are currently engaged in, without significant changes to their underlying parameters. Building on the idea that specific features of the transformer architecture enable the emergence of this property, we argue that it can also be supported by input segregation and dendritic amplification, features extensively observed in biological networks. We propose an architecture composed of gain-modulated recurrent networks that excels at in-context learning, exhibiting abilities inaccessible to standard networks.
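
The paper's exact model is not reproduced here, but the core idea can be illustrated with a minimal sketch: a recurrent rate network whose units receive a multiplicative, per-unit gain from a segregated context pathway (a stand-in for dendritic amplification), so that behavior changes with context while all synaptic weights stay fixed. Everything below — names, dimensions, and update rules — is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, C = 100, 10, 5  # recurrent units, input channels, context channels

# Fixed ("stable") synapses: none of these matrices is updated during adaptation.
W_rec = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
W_in  = rng.normal(0, 1.0, (N, M))               # segregated sensory input pathway
W_ctx = rng.normal(0, 1.0, (N, C))               # segregated context pathway

def step(h, x, ctx, dt=0.1):
    """One Euler step of a gain-modulated rate network (illustrative).

    The context vector sets a per-unit multiplicative gain g,
    dynamically reconfiguring the network's effective dynamics
    while every weight matrix remains fixed.
    """
    g = 1.0 + np.tanh(W_ctx @ ctx)           # per-unit gain driven by context
    drive = W_rec @ np.tanh(h) + W_in @ x    # recurrent + feedforward drive
    return h + dt * (-h + g * drive)         # leaky integration with modulated gain

# Adaptation here means changing the context signal, not the synapses:
# the same input produces different steady responses under different contexts.
h = np.zeros(N)
x = rng.normal(size=M)
for k in range(2):                  # two different one-hot "task contexts"
    ctx = np.eye(C)[k]
    for _ in range(50):
        h = step(h, x, ctx)
    print(f"context {k}: response norm = {np.linalg.norm(h):.3f}")
```

The multiplicative gain is the key design choice in this sketch: because context enters as a factor on each unit's drive rather than as an additive input, it can switch the network between qualitatively different dynamical regimes on fast timescales, which is the kind of weight-free reconfiguration the abstract invokes.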
