Accelerating Saccadic Response through Spatial and Temporal Cross-Modal Misalignments

Daniel Jiménez Navarro, Xi Peng, Yunxiang Zhang, Karol Myszkowski, Hans-Peter Seidel, Qi Sun, Ana Serrano
SIGGRAPH 2024

Abstract

Human senses and perception are our mechanisms to interact with and explore the external world. In this context, visual saccades (rapid and coordinated eye movements) serve as a primary tool for awareness of our surroundings. Typically, our perception is not limited to visual stimuli alone but is enriched by cross-modal interactions, such as the combination of sight and hearing. In this work, we investigate the temporal and spatial relationships of these interactions, focusing on how auditory cues that precede visual stimuli influence saccadic latency, the time it takes for the eyes to react and start moving towards a visual target. Our research, conducted within a virtual reality environment, reveals that auditory cues preceding visual information can significantly accelerate saccadic responses, but that this effect plateaus beyond certain temporal thresholds. Additionally, while the spatial positioning of visual stimuli influences the speed of these eye movements, as reported in previous research, we find that the location of auditory cues with respect to their corresponding visual stimulus does not have a comparable effect. These findings have a wide range of practical implications. To validate them, we implement a sports training task in a more realistic setting involving complex audiovisual signals. We then discuss a number of applications and showcase how our findings can be used to compensate latency for fairness in online video games.
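
To illustrate the latency-compensation idea mentioned at the end of the abstract, the following minimal Python sketch models saccadic latency as decreasing with the audio lead time (how far the auditory cue precedes the visual target) and plateauing beyond a threshold. All constants, function names, and the linear-then-flat shape are hypothetical placeholders chosen for illustration; they are not the paper's fitted values or method.

```python
# Hypothetical parameters -- illustrative only, not values fitted in the paper.
BASELINE_LATENCY_MS = 200.0  # assumed saccadic latency with no preceding audio cue
MAX_REDUCTION_MS = 40.0      # assumed maximum latency reduction at the plateau
PLATEAU_LEAD_MS = 300.0      # assumed audio lead time beyond which the effect saturates


def expected_saccadic_latency(audio_lead_ms: float) -> float:
    """Estimate saccadic latency (ms) given how far the auditory cue precedes
    the visual target. The benefit grows with lead time and plateaus beyond
    PLATEAU_LEAD_MS, matching the qualitative trend described in the abstract."""
    lead = max(0.0, audio_lead_ms)
    # Linear gain up to the plateau, constant afterwards.
    reduction = MAX_REDUCTION_MS * min(lead / PLATEAU_LEAD_MS, 1.0)
    return BASELINE_LATENCY_MS - reduction


def compensation_delay(lead_ms_player_a: float, lead_ms_player_b: float) -> float:
    """Extra delay (ms) to impose on the perceptually faster player so that
    both players have matched effective reaction opportunities."""
    latency_a = expected_saccadic_latency(lead_ms_player_a)
    latency_b = expected_saccadic_latency(lead_ms_player_b)
    return abs(latency_a - latency_b)


if __name__ == "__main__":
    # Player A hears the cue 100 ms before the visual event; player B, 350 ms.
    print(f"Player A latency: {expected_saccadic_latency(100):.1f} ms")
    print(f"Player B latency: {expected_saccadic_latency(350):.1f} ms")
    print(f"Delay to apply to the faster player: {compensation_delay(100, 350):.1f} ms")
```

In an actual game server, such a model would be calibrated per user and per cue type; the sketch only shows how a plateauing lead-time model could translate into a fairness-oriented delay.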