Just-in-time: gaze guidance behavior while action planning and execution in VR

Abstract

Eye movements in natural environments have primarily been studied for over-learned everyday activities, such as tea making, sandwich making, and driving, that have a fixed sequence of associated actions. These studies indicate a just-in-time strategy of fixations, i.e., the fixation that provides the information for a particular action immediately precedes that action. However, it is unclear whether this strategy is also at play when the task is novel and a sequence of actions must be planned in the moment. To study attention mechanisms during a novel task in a natural setting, we recorded gaze and body movement data in a virtual environment while subjects performed a sorting task, arranging objects on a life-size shelf according to their features. To probe gaze guidance related to action planning and execution, we also controlled the complexity of the sorting task by introducing EASY and HARD conditions. We show that subjects perform close to optimally in EASY trials and more sub-optimally in HARD trials. Based on the scan-paths as well as the latency of first fixations on task-relevant regions of interest (ROIs) during action planning and execution, we show that subjects use a just-in-time strategy of fixating on task-relevant objects. From our findings, we conclude that subjects use the just-in-time strategy in a way that sacrifices optimality by offloading cognitive task demands onto the environment. These findings lend further support to the embodied cognition framework of cognitive processing in natural environments.
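As a point of reference for the latency measure mentioned above, the sketch below shows one plausible way to compute the latency of the first fixation on a task-relevant ROI relative to the onset of a planning or execution epoch. This is a minimal, hypothetical illustration, not the authors' analysis code; the column names and the pandas-based data layout are assumptions.

```python
import pandas as pd


def first_fixation_latency(gaze: pd.DataFrame, roi: str, epoch_onset: float):
    """Return the latency (in seconds) of the first fixation on `roi` after
    `epoch_onset`, or None if the ROI is never fixated in the epoch.

    Assumed columns of `gaze` (hypothetical layout):
      - 'timestamp':   sample time in seconds
      - 'fixated_roi': label of the ROI the gaze currently lands on
    """
    hits = gaze[(gaze["timestamp"] >= epoch_onset) & (gaze["fixated_roi"] == roi)]
    if hits.empty:
        return None
    return hits["timestamp"].iloc[0] - epoch_onset


# Toy example: the 'target_object' ROI is first fixated 0.4 s after a
# (hypothetical) planning-epoch onset at t = 1.0 s.
gaze = pd.DataFrame({
    "timestamp":   [1.0, 1.2, 1.4, 1.6, 1.8],
    "fixated_roi": ["shelf", "shelf", "target_object", "target_object", "drop_location"],
})
print(first_fixation_latency(gaze, "target_object", epoch_onset=1.0))  # ~0.4
```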

Publication
Preprint on bioRxiv