Issues in CogSci: Gabriel Diaz, Rochester Institute of Technology


Sage 4101

March 19, 2014 12:00 PM - 1:30 PM

Although it is well accepted that humans rely on visual prediction to overcome sensory-motor delays, little is known about the underlying mechanisms. Previously, I investigated visual prediction in the context of a virtual-reality ball interception task. When preparing to intercept a ball in flight, humans make predictive saccades ahead of the ball’s current position to a location along the ball’s future trajectory. Such visual prediction is ubiquitous among novices, extremely accurate, and can reach at least 400 ms into the future. In this continuation, I shift my focus to the role of these predictive eye movements in guiding the simultaneous catching movement of the hand. Subjects viewing a virtual-reality environment through a head-mounted display are asked to intercept an approaching virtual ball shortly after its bounce upon the ground. Because the ball approaches very quickly, subjects adopt a strategy of moving their hand close to the predicted location of the ball’s arrival early on, while the ball is still approaching the bounce point. Using computational modeling techniques, we can generate predictions of the ideal hand position at the time of the bounce as a combination of pre-bounce visual information and the predicted final arrival height indicated by subjects’ predictive eye movements. By changing the way that our model combines this information, we can differentiate between behavior that reflects a central tendency, a learned mapping between hand position and pre-bounce kinematics, or reliance on a Bayesian prior.
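The contrast between the three candidate strategies can be sketched in a few lines. This is an illustrative sketch only, not the talk's actual model: the collapsing of pre-bounce kinematics into a single scalar cue, and all parameter values (variances, slope, intercept), are assumptions made here for clarity. Heights are in arbitrary units.

```python
# Hypothetical sketch of three candidate strategies for predicting the
# ball's arrival height at the bounce. All numbers are illustrative
# assumptions, not values from the study.

def central_tendency(prior_mean, cue):
    # Strategy 1: ignore the current ball entirely; always aim at the
    # average arrival height experienced across past trials.
    return prior_mean

def learned_mapping(prior_mean, cue, slope=0.9, intercept=0.05):
    # Strategy 2: a deterministic mapping from pre-bounce kinematics
    # (collapsed here into one scalar cue) to a predicted height,
    # with no influence of the prior.
    return slope * cue + intercept

def bayesian_combination(prior_mean, cue, prior_var=0.04, cue_var=0.01):
    # Strategy 3: reliability-weighted (Gaussian) combination of the
    # prior over arrival heights and the pre-bounce visual estimate.
    # The noisier the cue relative to the prior, the more the estimate
    # is pulled toward the prior mean.
    w = prior_var / (prior_var + cue_var)  # weight on the visual cue
    return w * cue + (1 - w) * prior_mean
```

The three strategies make diverging predictions as the cue moves away from the prior mean, which is what lets the model comparison separate them: the central-tendency prediction never moves, the learned mapping tracks the cue alone, and the Bayesian combination lands in between, with its position set by the relative reliabilities.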


