Hierarchical Bayesian Models of Reinforcement Learning: Introduction and comparison to alternative methods

Published: Oct. 20, 2020, 6:02 a.m.

Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.19.345512v1?rss=1

Authors: van Geen, C., Gerraty, R. T.

Abstract: Reinforcement learning models have been used extensively, and with great success, to capture learning and decision-making processes in humans and other organisms. One essential goal of these computational models is generalization to new sets of observations. Extracting parameters that can reliably predict out-of-sample data can be difficult, however: reinforcement learning models often suffer from non-identifiability, which can lead to poor predictive accuracy. Using prior distributions to regularize parameter estimates can be an effective remedy. While previous research has suggested that empirical priors estimated from a separate dataset improve identifiability and predictive accuracy, this paper outlines an alternative method for deriving empirical priors: hierarchical Bayesian modeling. We provide a detailed introduction to this method and show that using hierarchical models to simultaneously extract and impose empirical priors leads to better out-of-sample prediction while being more data efficient.

Copyright belongs to the original authors.
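For the exact model specification, consult the preprint itself; but the core idea of a hierarchical Bayesian treatment can be illustrated generatively. Each subject's reinforcement-learning parameters, e.g., a learning rate alpha and an inverse temperature beta, are drawn from group-level distributions (roughly, theta_s ~ Normal(mu, sigma) on an unconstrained scale), so the group-level parameters act as an empirical prior that is learned jointly with the subject-level estimates. The Python sketch below simulates this generative structure for a two-armed bandit. All names and parameter values here are illustrative assumptions, not the authors' actual model; in a real analysis the group- and subject-level parameters would be inferred jointly from choice data (e.g., via MCMC in a probabilistic programming language such as Stan or PyMC) rather than fixed by hand.

import numpy as np

rng = np.random.default_rng(0)

# Group-level hyperparameters (hypothetical values, chosen for illustration).
# In a fitted hierarchical model these would be estimated from the data and
# would serve as the empirical prior over subject-level parameters.
mu_alpha, sigma_alpha = 0.0, 0.5   # learning rate, on the logit scale
mu_beta, sigma_beta = 1.0, 0.3    # inverse temperature, on the log scale

n_subjects, n_trials, n_arms = 20, 100, 2
true_reward_probs = np.array([0.7, 0.3])  # assumed bandit payoff probabilities

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(q, beta):
    # Softmax choice rule with inverse temperature beta.
    z = beta * (q - q.max())
    p = np.exp(z)
    return p / p.sum()

data = []
for s in range(n_subjects):
    # Subject-level parameters are draws from the group distribution:
    # this is the hierarchical prior that shrinks (regularizes) each
    # subject's estimate toward the group mean.
    alpha = sigmoid(rng.normal(mu_alpha, sigma_alpha))  # learning rate in (0, 1)
    beta = np.exp(rng.normal(mu_beta, sigma_beta))      # inverse temperature > 0

    q = np.zeros(n_arms)  # Q-values for the two-armed bandit
    for t in range(n_trials):
        choice = rng.choice(n_arms, p=softmax(q, beta))
        reward = float(rng.random() < true_reward_probs[choice])
        q[choice] += alpha * (reward - q[choice])  # delta-rule update
        data.append((s, t, choice, reward))

The shrinkage built into this structure is the mechanism behind the abstract's claims: subjects with few or noisy trials are pulled toward the group-level mean instead of taking extreme, poorly identified parameter values, which is what improves out-of-sample prediction and data efficiency relative to fitting each subject independently.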