Human Visual Search as a Deep Reinforcement Learning Solution to a POMDP

MOTIVATION

Can we derive and understand user intention from their eye movement behaviour?

WHY SHOULD I CARE?

Any adaptive or assistive system needs to understand user intention. An empirically verified cognitive model should help to constrain the inverse problem of identifying an individual's objectives from their behaviour, and thereby help build systems that attempt to determine what a user is trying to do.

ABSTRACT

When people search for a target in a novel image, they often make use of eye movements to bring the relatively high-acuity fovea to bear on areas of interest. The strategies that control these eye movements during visual search have been of substantial scientific interest. In the current article we report a new computational model that shows how visual search strategies emerge as an approximately optimal adaptation to perceptual/motor constraints. The model solves a Partially Observable Markov Decision Process (POMDP) using deep Q-learning to acquire strategies that optimise the trade-off between speed and accuracy. Results are reported for the distractor-ratio task.
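To make the ingredients of this framing concrete, below is a minimal sketch of a foveated-search POMDP solved with deep Q-learning. It is not the model reported in the article: the one-dimensional display, the eccentricity-dependent noise as a stand-in for acuity, the reward values encoding the speed/accuracy trade-off, and names such as FoveatedSearchEnv are all illustrative assumptions.

```python
# Sketch only: a toy foveated-search POMDP trained with deep Q-learning.
# Display size, acuity/noise model, rewards, and network are assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

N = 16                                   # candidate fixation locations (assumed)
STEP_COST, HIT, MISS = -0.05, 1.0, -1.0  # speed/accuracy trade-off (assumed)

class FoveatedSearchEnv:
    """Target hidden at one of N locations; observation noise grows with
    distance from the current fixation, a crude stand-in for acuity."""
    def reset(self):
        self.target = random.randrange(N)
        self.fix = random.randrange(N)
        return self._obs()

    def _obs(self):
        dist = np.abs(np.arange(N) - self.fix) / N   # eccentricity from fixation
        noise = np.random.randn(N) * (0.3 + dist)    # acuity falls off peripherally
        signal = np.zeros(N)
        signal[self.target] = 1.0
        fix_onehot = np.eye(N)[self.fix]
        return np.concatenate([signal + noise, fix_onehot]).astype(np.float32)

    def step(self, action):
        if action < N:                               # saccade to a new location
            self.fix = action
            return self._obs(), STEP_COST, False
        guess = action - N                           # declare target found here
        return self._obs(), (HIT if guess == self.target else MISS), True

def train(episodes=2000, gamma=0.95, eps=0.1, max_steps=30):
    env = FoveatedSearchEnv()
    q = nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(), nn.Linear(64, 2 * N))
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    buf = []                                         # simple replay buffer
    for _ in range(episodes):
        s, done = env.reset(), False
        for _ in range(max_steps):
            a = (random.randrange(2 * N) if random.random() < eps
                 else q(torch.tensor(s)).argmax().item())
            s2, r, done = env.step(a)
            buf.append((s, a, r, s2, done))
            buf = buf[-5000:]
            s = s2
            # one TD(0) gradient step on a sampled minibatch
            batch = random.sample(buf, min(32, len(buf)))
            ss, aa, rr, ss2, dd = map(np.array, zip(*batch))
            with torch.no_grad():
                tgt = (torch.tensor(rr, dtype=torch.float32)
                       + gamma * (1 - torch.tensor(dd, dtype=torch.float32))
                       * q(torch.tensor(ss2)).max(1).values)
            pred = q(torch.tensor(ss)).gather(1, torch.tensor(aa)[:, None]).squeeze(1)
            loss = nn.functional.mse_loss(pred, tgt)
            opt.zero_grad(); loss.backward(); opt.step()
            if done:
                break
    return q

if __name__ == "__main__":
    train(episodes=200)                              # quick smoke run
```

Even in this toy setting, the learned policy must balance the per-fixation time cost against the accuracy of its eventual declaration, which is the same trade-off the full model optimises.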

Aditya Acharya