Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large-Scale Text Corpora

Applying machine learning algorithms to large-scale, text-based corpora to derive semantic embeddings presents a unique opportunity to investigate, at scale, how human semantic knowledge is organized and how people use it to judge fundamental relationships, such as similarity between concepts. However, efforts to date have shown a substantial discrepancy between algorithm predictions and empirical judgments. Here, we introduce a novel approach to generating embeddings, motivated by the psychological theory that semantic context plays a critical role in human judgments. Specifically, we train state-of-the-art machine learning algorithms on contextually-constrained text corpora and show that this greatly improves predictions of similarity judgments and feature ratings. By improving the correspondence between machine-derived embeddings and empirical measurements of human judgments, the approach we describe advances the use of large-scale text corpora for understanding the structure of human semantic representations.
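The abstract does not specify the exact models, corpora, or evaluation pipeline. As a rough illustration of the general approach it describes, the sketch below trains a word2vec model (an assumption; any embedding method could stand in) on a corpus restricted to a single semantic context, then correlates the model's cosine similarities with human similarity ratings. The corpus, word pairs, and ratings here are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch: embeddings from a contextually-constrained corpus,
# evaluated against human similarity judgments. Illustrative only.

from gensim.models import Word2Vec
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

# Assume `sentences` contains tokenized text drawn only from documents
# about one semantic context (e.g., animals), rather than from a
# generic, domain-general corpus.
sentences = [
    ["the", "lion", "chased", "the", "zebra", "across", "the", "savanna"],
    ["a", "dolphin", "is", "a", "marine", "mammal"],
    # ... many more in-context sentences
]

# Train a standard embedding model on the constrained corpus.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)

def embedding_similarity(w1, w2):
    """Cosine similarity between two word vectors."""
    return 1.0 - cosine(model.wv[w1], model.wv[w2])

# Hypothetical human similarity ratings for word pairs (0-1 scale).
human_ratings = {
    ("lion", "zebra"): 0.55,
    ("dolphin", "zebra"): 0.40,
    ("lion", "dolphin"): 0.35,
}

pairs = list(human_ratings)
model_scores = [embedding_similarity(a, b) for a, b in pairs]
human_scores = [human_ratings[p] for p in pairs]

# Rank correlation between model predictions and empirical judgments;
# higher correlation indicates better recovery of human semantic structure.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation with human judgments: {rho:.2f}")
```

In practice, the correlation would be computed over many word pairs within each context and compared against a baseline model trained on an unconstrained corpus of similar size.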
