Ryan Hope and Sean Barton, RPI Graduate Students

Sage 4101

December 2, 2015 12:00 PM - 1:30 PM

Sean Barton:

Abstract:

Walking over naturalistic terrain requires tight coordination between the visual system and the motor system. This coordination is heavily dependent on the biomechanical structure of the body. Previous work in our lab has demonstrated that when stepping onto a target foothold, information about the upcoming ground plane is most useful when it is present during phases of the gait cycle that permit adjustment of the pendular mechanics of the lower limbs. That is, visual information is most useful for accurate foot placement when it is available during the last half of the preceding step. What is particularly interesting about this finding is that how perceptual information is used during continuous locomotion depends heavily on the biomechanical structure of the body.

In this talk I will expand on this notion by asking how the structure of the body dictates how perceptual information is used in more complex walking contexts. Responding to changes in the environment, making choices about stepping trajectory, and navigating challenging terrain all place larger demands on the perceptual system. Maintaining the energetic efficiency and stability of simple over-ground walking requires using perceptual information in a manner commensurate with the pendulum-like structure of the limbs. I will also present data from a first experiment in this vein, exploring how walkers respond to sudden perturbations in foot placement during continuous walking.
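
For readers unfamiliar with the pendular framing, the short Python sketch below simulates a single stance phase as a rigid inverted pendulum pivoting over the stance foot, the idealization behind the "pendulum-like structure of the limbs" mentioned above. It is a minimal illustration only; the leg length, initial angle, and initial angular velocity are assumed values, not parameters from the experiments described in the talk.

    import math

    # Minimal inverted-pendulum sketch of one stance phase. The stance leg
    # is treated as a rigid rod pivoting over the foot, so the only
    # dynamics are gravitational: theta'' = (g / L) * sin(theta).
    # All parameter values below are illustrative assumptions.
    G = 9.81    # gravitational acceleration, m/s^2
    L = 0.9     # effective leg length, m
    DT = 0.001  # Euler integration step, s

    def stance_phase(theta0=-0.3, omega0=1.2):
        """Integrate leg angle theta (radians from vertical) from the
        initial angle theta0 until the leg has rotated symmetrically
        forward to -theta0; returns stance duration and exit velocity."""
        theta, omega, t = theta0, omega0, 0.0
        while theta < -theta0:
            omega += (G / L) * math.sin(theta) * DT  # gravity torque term
            theta += omega * DT
            t += DT
        return t, omega

    duration, exit_omega = stance_phase()
    print(f"stance duration ~{duration:.2f} s, exit velocity ~{exit_omega:.2f} rad/s")

Because the pendulum exchanges kinetic and potential energy largely passively, the stance limb's trajectory is cheapest to adjust before it is committed, which is consistent with the finding above that visual information is most useful during the last half of the preceding step.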

Ryan Hope:

Abstract:

A growing body of research on oculomotor control suggests that humans have much less control over their eye movements than once assumed. High-speed eye trackers have revealed that the eyes are in constant motion, even during fixation, and that these fixational eye movements may be functional. The growing consensus is that eye movements are initiated automatically and rhythmically by a trigger from a low-level brainstem oscillator (a so-called saccade timer). If this is true, how does such a system produce the sense that humans can voluntarily move their eyes wherever they want, whenever they want? The CRISP model of saccade generation, which models the saccade timer as a random walk process, proposes two mechanisms for control: cognitive processes can modulate the rate of the saccade timer, and they can probabilistically cancel ongoing saccade programming depending on its stage. The present work tests whether these two mechanisms are sufficient to capture the complex eye movement behavior observed in the antisaccade task, a task specifically designed to elicit eye movements that are incompatible with top-down goals. We hypothesize that these two mechanisms will not be sufficient to reproduce human behavior in the antisaccade task and propose that a spatial component is needed. To test this hypothesis, the parameter spaces of the baseline CRISP model and of two variant models (one with a bottom-up, saliency-driven saccade target map and one with a top-down, attention-driven saccade target map) were explored to determine whether they could generate distributions of saccade measures consistent with those produced by humans performing the antisaccade task. The top-down spatial model was able to replicate the distribution of first-saccade latencies on correct prosaccade trials as well as on incorrect antisaccade trials.
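
As a rough illustration of the two CRISP control mechanisms described above, the Python sketch below implements a random-walk saccade timer whose rate can be scaled by cognition and whose saccade programs can be cancelled only while still in an early "labile" stage. The walk length, rates, and stage durations are assumed values chosen for illustration; they are not the fitted CRISP parameters from this work.

    import random

    # Hedged sketch of a CRISP-style saccade timer: a random walk toward a
    # threshold triggers saccade programming, with the two control
    # mechanisms named in the abstract: (1) cognition scales the walk rate
    # and (2) a new timer signal cancels a program that is still in its
    # labile stage. All numeric values are illustrative assumptions.
    N_STEPS = 30         # walk states between threshold crossings
    BASE_RATE = 200.0    # state transitions per second at baseline
    LABILE_MS = 150.0    # assumed cancellable stage duration
    NONLABILE_MS = 50.0  # assumed non-cancellable stage duration

    def timer_interval_ms(rate_modulation=1.0):
        """Time for the walk to reach threshold: a sum of exponential
        waits whose rate cognition can scale (mechanism 1)."""
        rate = BASE_RATE * rate_modulation
        return sum(random.expovariate(rate) for _ in range(N_STEPS)) * 1000.0

    def simulate_fixation_ms(rate_modulation=1.0):
        """One fixation: each timer signal starts a saccade program and
        cancels any program still in its labile stage (mechanism 2).
        A program that reaches the non-labile stage runs to completion."""
        t, program_start = 0.0, None
        while True:
            t += timer_interval_ms(rate_modulation)
            if program_start is not None and t - program_start >= LABILE_MS:
                return program_start + LABILE_MS + NONLABILE_MS
            program_start = t  # (re)start programming, cancelling the old one

    durations = [simulate_fixation_ms() for _ in range(10000)]
    print(f"mean simulated fixation ~{sum(durations) / len(durations):.0f} ms")

Slowing the timer (rate_modulation < 1) spaces the signals out, so programs survive the labile stage more often and fixations lengthen. The spatial target maps tested in the two variant models would sit on top of this timing machinery, determining where each triggered saccade lands rather than when it occurs.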

 

HD VIDEO LINK
