Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices

Lei Zhang*, Lukas Lengersdorff, Nace Mikus, Jan Gläscher, Claus Lamm

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigation of latent mechanistic processes. However, the increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla–Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, using simulation, we unpack the functional role of the learning rate and pinpoint what can easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and suggest how to justify whether observed neural activation relates to the prediction error rather than to outcome valence. Finally, we argue that the posterior predictive check is a crucial step after model comparison, and we advocate hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models, to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.
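To make the concepts in the abstract concrete, the Rescorla–Wagner update rule and the outcome/prediction-error collinearity it mentions can be sketched as follows. This is a minimal illustrative simulation, not the authors' code; the reward probability (0.8), learning rate (0.3), and trial count are arbitrary choices for demonstration.

```python
import numpy as np

def rescorla_wagner(rewards, alpha=0.3, v0=0.0):
    """Simulate Rescorla-Wagner updates: V[t+1] = V[t] + alpha * (r[t] - V[t])."""
    v = v0
    values, pes = [], []
    for r in rewards:
        pe = r - v            # prediction error (delta) on this trial
        values.append(v)
        pes.append(pe)
        v += alpha * pe       # learning-rate-weighted value update
    return np.array(values), np.array(pes)

# Simulate 100 trials in which reward occurs with probability 0.8
rng = np.random.default_rng(0)
rewards = (rng.random(100) < 0.8).astype(float)
values, pes = rescorla_wagner(rewards, alpha=0.3)

# The learned value hovers around the reward probability (~0.8), while
# the prediction error remains strongly correlated with the outcome
# itself -- the collinearity issue the paper discusses for model-based fMRI.
outcome_pe_corr = np.corrcoef(rewards, pes)[0, 1]
```

A larger learning rate makes the value track recent outcomes more closely (faster but noisier learning), which is why interpreting group differences in the learning rate requires care about the task's reward statistics.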
Original language: English
Pages (from-to): 695-707
Number of pages: 13
Journal: Social Cognitive and Affective Neuroscience
Volume: 15
Issue number: 6
Publication status: Published - Jun 2020

Keywords

  • 501021 Social psychology
  • 501030 Cognitive science
  • 101028 Mathematical modelling
  • BEHAVIOR
  • DOPAMINE
  • FMRI
  • HUMAN STRIATUM
  • MECHANISMS
  • PREDICTION ERRORS
  • REPRESENTATIONS
  • REWARD
  • ROLES
  • SEROTONIN
  • computational modeling
  • learning rate
  • model-based fMRI
  • prediction error
  • reinforcement learning
  • social decision-making
