Abstract
Computational models are widely used in cognitive science to reveal the mechanisms of learning and decision making. However, it is hard to know whether all meaningful variance in behavior has been accounted for by the best-fit model selected through model comparison. In this work, we propose to use recurrent neural networks (RNNs) to assess the limits of predictability afforded by a model of behavior, and to reveal what (if anything) is missing in the cognitive models. We apply this approach to a complex reward-learning task with a large choice space and rich individual variability. The RNN models outperform the best known cognitive model throughout the entire learning phase. By analyzing and comparing model predictions, we show that the RNN models are more accurate at capturing the temporal dependency between subsequent choices, and better at identifying the subspace of choices in which participants' behavior is more likely to reside. The RNNs can also capture individual differences across participants by utilizing an embedding. The usefulness of this approach suggests promising applications of RNNs for predicting human behavior in complex cognitive tasks, in order to reveal cognitive mechanisms and their variability.