no code implementations • 20 Jan 2023 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen
Context changes expectations about upcoming words: after a story involving an anthropomorphic peanut, comprehenders expect the sentence "the peanut was in love" more than "the peanut was salted", as indexed by N400 amplitude (Nieuwland & van Berkum, 2006).
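As an illustration of the general paradigm (not the authors' actual pipeline), here is a minimal sketch of how one might compare a language model's surprisal for the two continuations. The choice of GPT-2, the HuggingFace transformers API, the bare-sentence framing (without the preceding peanut story that drives the context effect), and the `surprisal` helper are all illustrative assumptions:

```python
# Hypothetical sketch: GPT-2 surprisal for the two continuations from
# Nieuwland & van Berkum (2006). Without the preceding story context,
# the model (like readers) should find "salted" less surprising.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, continuation: str) -> float:
    """Surprisal in bits of `continuation` given `context` under the model."""
    ctx_ids = tokenizer.encode(context, return_tensors="pt")
    cont_ids = tokenizer.encode(continuation)
    ids = torch.cat([ctx_ids, torch.tensor([cont_ids])], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    # The continuation token at position n_ctx + i is predicted by the
    # logits at position n_ctx + i - 1, so index one step back.
    nats = sum(
        log_probs[0, ctx_ids.shape[1] - 1 + i, tok].item()
        for i, tok in enumerate(cont_ids)
    )
    return -nats / math.log(2)  # convert nats to bits

context = "The peanut was"
for ending in [" in love", " salted"]:
    print(f"'{context}{ending}': {surprisal(context, ending):.2f} bits")
```

Prepending the discourse context before `context` is what would let the model, like human comprehenders, come to prefer the animate continuation.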
no code implementations • 2 Sep 2021 • James A. Michaelov, Seana Coulson, Benjamin K. Bergen
In this study, we investigate whether the linguistic predictions of computational language models or those of humans better reflect how natural language stimuli modulate N400 amplitude.
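A hedged sketch of the kind of comparison this question implies: regressing N400 amplitudes on language-model surprisal and on human cloze probability, then comparing fit. The placeholder arrays, variable names, and use of simple least squares are assumptions for illustration, not the paper's stimuli or statistical procedure:

```python
# Illustrative sketch only: how well do LM surprisal vs. human cloze
# probability each predict N400 amplitude? Data here are random
# placeholders standing in for real single-word measurements.
import numpy as np

rng = np.random.default_rng(0)
n400 = rng.normal(size=50)          # placeholder N400 amplitudes (microvolts)
lm_surprisal = rng.normal(size=50)  # placeholder per-word LM surprisal (bits)
cloze = rng.uniform(size=50)        # placeholder human cloze probabilities

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Variance in y explained by a one-predictor least-squares fit on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("LM surprisal R^2:", r_squared(lm_surprisal, n400))
print("Cloze        R^2:", r_squared(cloze, n400))
```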
no code implementations • 20 Jul 2021 • James A. Michaelov, Megan D. Bardolph, Seana Coulson, Benjamin K. Bergen
Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to predict metrics of human language comprehension better than language models with other architectures, such as recurrent neural networks.