Suboptimal and trait-like reinforcement learning strategies correlate with midbrain encoding of prediction errors

8 Dec 2021  ·  Liran Szlak, Kristoffer Aberg, Rony Paz ·

During probabilistic learning, organisms often apply a sub-optimal "probability-matching" strategy, in which selection rates match reward probabilities, rather than the optimal "maximization" strategy, in which the option with the highest reward probability is always selected. Despite decades of research, the mechanisms contributing to probability-matching are still under debate, and it is particularly noteworthy that no differences between probability-matching and maximization strategies have been reported at the level of the brain. Here, we provide theoretical proof for a computational model that explains the complete range of behaviors between pure maximization and pure probability-matching. Fitting this model to the behavior of 60 participants performing a probabilistic reinforcement learning task during fMRI scanning confirmed the model-derived prediction that probability-matching relates to an increased integration of negative outcomes during learning, as indicated by a stronger coupling between midbrain BOLD signal and negative prediction errors. Because the degree of probability-matching was consistent within an individual across nine different conditions, our results further suggest that the tendency to express a particular learning strategy is a trait-like feature of an individual.
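The idea that stronger integration of negative outcomes pushes behavior from maximization toward probability-matching can be illustrated with a simple sketch. The snippet below is not the authors' model; it is a generic two-armed-bandit learner with separate learning rates for positive and negative prediction errors and a softmax choice rule, where all parameter values (`alpha_pos`, `alpha_neg`, `beta`, the 70%/30% reward schedule) are illustrative assumptions.

```python
import math
import random

def simulate(alpha_pos, alpha_neg, p_reward=(0.7, 0.3),
             beta=5.0, trials=2000, seed=0):
    """Illustrative two-armed bandit learner with asymmetric learning rates.

    alpha_pos / alpha_neg weight positive vs. negative prediction errors;
    a larger alpha_neg is one hypothesized route to probability-matching.
    Returns the selection rate of the higher-reward option.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]          # value estimates for the two options
    picks_best = 0
    for _ in range(trials):
        # Softmax choice between the two options
        exps = [math.exp(beta * v) for v in q]
        a = 0 if rng.random() < exps[0] / sum(exps) else 1
        picks_best += (a == 0)
        # Bernoulli reward and prediction error
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        pe = r - q[a]
        # Asymmetric update: negative outcomes weighted by alpha_neg
        lr = alpha_pos if pe >= 0 else alpha_neg
        q[a] += lr * pe
    return picks_best / trials

# Symmetric learning rates approach maximization; overweighting negative
# prediction errors pulls choice rates toward the 70% reward probability.
rate_max = simulate(alpha_pos=0.1, alpha_neg=0.1)
rate_match = simulate(alpha_pos=0.1, alpha_neg=0.5)
```

Under this sketch, the asymptotic value of an option with reward probability p is alpha_pos·p / (alpha_pos·p + alpha_neg·(1−p)), so raising `alpha_neg` compresses the value gap between options and makes softmax choices more matching-like.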

