EXPLAINING FEATURES OF SIMPLE HUMAN DECISIONS USING BAYESIAN NEURAL NETWORKS
Abstract
Feedforward neural networks exhibit excellent object recognition performance and currently provide the best models of biological vision. However, despite their remarkable performance in recognizing unseen images, their decision behavior differs markedly from human decision-making. Standard feedforward neural networks perform an identical number of computations to process a given stimulus and always produce the same response for that stimulus. Human decisions, in contrast, take a variable amount of time and are stochastic (i.e., the same stimulus elicits different reaction times, RTs, and sometimes different responses on different trials). Here we develop a new neural network, RTNet, that closely approximates all basic features of perceptual decision-making. RTNet has noisy weights and processes the same stimulus multiple times until the accumulated evidence reaches a threshold, thus producing both variable RTs and stochastic decisions. In addition, RTNet exhibits several features of human perceptual decision-making, including a speed-accuracy tradeoff, right-skewed RT distributions, and lower accuracy and confidence for harder decisions, among others. Finally, data from 60 human subjects performing a digit discrimination task demonstrate that the RTs, accuracy, and confidence produced by RTNet for individual novel images correlate with the same quantities produced by human subjects. Overall, RTNet is the first neural network that exhibits all basic signatures of perceptual decision-making.
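To make the accumulation-to-threshold idea concrete, the following is a minimal, hypothetical PyTorch sketch of the scheme the abstract describes: a classifier with noisy weights processes the same image repeatedly, and per-class evidence is summed until one class crosses a threshold. It is not the authors' implementation; the network size, noise level, threshold value, and the names `NoisyNet` and `accumulate_to_threshold` are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code) of accumulation-to-threshold
# decisions with a noisy-weight network, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyNet(nn.Module):
    """Small feedforward classifier whose weights are perturbed on every pass."""

    def __init__(self, n_in=784, n_hidden=128, n_classes=10, weight_noise_sd=0.1):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_classes)
        self.weight_noise_sd = weight_noise_sd

    def forward(self, x):
        # Fresh Gaussian weight noise on each forward pass, so repeated
        # presentations of the same image yield different evidence samples.
        w1 = self.fc1.weight + torch.randn_like(self.fc1.weight) * self.weight_noise_sd
        w2 = self.fc2.weight + torch.randn_like(self.fc2.weight) * self.weight_noise_sd
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return F.linear(h, w2, self.fc2.bias)


def accumulate_to_threshold(model, image, threshold=5.0, max_steps=100):
    """Process one image repeatedly, summing class probabilities until a class
    crosses `threshold`. Returns (choice, RT in steps, a confidence proxy).
    Lowering the threshold speeds decisions but reduces accuracy, yielding
    a speed-accuracy tradeoff."""
    evidence = torch.zeros(model.fc2.out_features)
    for step in range(1, max_steps + 1):
        with torch.no_grad():
            evidence += F.softmax(model(image), dim=-1).squeeze(0)
        if evidence.max() >= threshold:
            break
    choice = int(evidence.argmax())
    # Simple confidence proxy: winning evidence relative to total evidence.
    confidence = float(evidence.max() / evidence.sum())
    return choice, step, confidence


# Example usage on a random "digit" image; both the RT and (for hard images)
# the choice vary across calls because the weight noise is re-sampled each pass.
model = NoisyNet()
x = torch.rand(1, 784)
choice, rt, conf = accumulate_to_threshold(model, x, threshold=5.0)
print(f"choice={choice}, RT={rt} steps, confidence={conf:.2f}")
```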