An Experiment on Network Density and Sequential Learning
We conduct a sequential social-learning experiment in which subjects each guess a hidden state based on private signals and the guesses of a subset of their predecessors. A network determines which predecessors are observable, and we compare subjects' accuracy on sparse and dense networks. Accuracy gains from social learning are twice as large on sparse networks as on dense networks. Models of naive inference, in which agents ignore the correlation among their observations, predict this comparative static in network density, whereas the finding is difficult to reconcile with rational-learning models.
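To make the naive-inference benchmark concrete, the following is a minimal toy simulation of correlation-neglecting sequential learning, not the paper's experimental design: the binary state, signal precision p = 0.7, 20 agents, the specific sparse and dense observation structures, and the majority rule with own-signal tie-breaking are all illustrative assumptions. Naive agents here treat every observed guess as if it were an independent private signal.

```python
import random

def run_trial(rng, n_agents, neighbors, p):
    """One round: binary hidden state, noisy private signals, naive guesses."""
    state = rng.randint(0, 1)
    guesses = []
    for i in range(n_agents):
        # Private signal matches the state with probability p (assumed value).
        signal = state if rng.random() < p else 1 - state
        # Naive rule: count each observed guess as an independent signal.
        votes = [guesses[j] for j in neighbors(i)] + [signal]
        ones = sum(votes)
        if ones * 2 == len(votes):          # tie: fall back on own signal
            guesses.append(signal)
        else:
            guesses.append(1 if ones * 2 > len(votes) else 0)
    return guesses[-1] == state

def accuracy(neighbors, n_agents=20, p=0.7, trials=4000, seed=1):
    """Monte Carlo accuracy of the last agent in the sequence."""
    rng = random.Random(seed)
    return sum(run_trial(rng, n_agents, neighbors, p) for _ in range(trials)) / trials

autarky = accuracy(lambda i: [])                             # no social observation
sparse  = accuracy(lambda i: list(range(max(0, i - 2), i)))  # last two predecessors
dense   = accuracy(lambda i: list(range(i)))                 # all predecessors
print(f"autarky {autarky:.3f}  sparse {sparse:.3f}  dense {dense:.3f}")
```

Swapping in other `neighbors` functions lets one explore how the observation network shapes naive aggregation; how accuracy compares across densities depends on the signal structure and action space, which this coarse binary toy does not calibrate to the experiment.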