Dan Veksler, Grad Student

Sage 4101

February 25, 2009 12:00 PM - 2:00 PM

Abstract:

Cognitive models often hardcode knowledge in order to model specific tasks. Though useful and necessary, this knowledge engineering presents a daunting challenge for relatively complex tasks: most human knowledge is implicit and varies across individuals. Adaptive Autonomous Agents are suggested as a way to avoid hardcoding human knowledge and to focus instead on developing computational agents that automatically adapt to different tasks in a stimulus-rich environment. Two mechanisms for advancing autonomy in the ACT-R cognitive architecture are compared: Reinforcement Learning (RL) and Goal-Proximity Decision-making (GPD). Results from a human experiment suggest that GPD, which is based on associative learning, can account for human data prior to reward, where RL cannot.
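
The contrast between the two mechanisms can be illustrated with a minimal sketch (Python). This is not the talk's model or ACT-R code; the chain environment, learning rates, and the one-step goal-association rule below are illustrative assumptions. The point it shows is that a reward-driven RL update has nothing to learn from before the first reward, while a GPD-style learner builds state-to-state associations during reward-free exploration and can already prefer the action whose outcome is most associated with the goal.

```python
import random

N_STATES = 6            # simple chain: 0 - 1 - 2 - 3 - 4 - 5, with 5 as the goal
GOAL = N_STATES - 1
ACTIONS = (-1, +1)      # step left or right along the chain

def step(state, action):
    return min(max(state + action, 0), N_STATES - 1)

# --- reward-free exploration phase ---
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # RL action values
assoc = {}                                                    # GPD associative strengths

state = 0
for _ in range(2000):
    action = random.choice(ACTIONS)
    nxt = step(state, action)
    reward = 0.0   # no reward is ever delivered in this phase

    # RL (Q-learning-style) update: with zero reward everywhere, q never changes
    target = reward + 0.9 * max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += 0.1 * (target - q[(state, action)])

    # GPD-style update: strengthen the association between successive states
    assoc[(state, nxt)] = assoc.get((state, nxt), 0.0) + 1.0
    state = nxt

# --- choice at a test state, still before any reward ---
test_state = 3

def rl_choice(s):
    vals = [q[(s, a)] for a in ACTIONS]
    return "indifferent (all values are 0)" if len(set(vals)) == 1 else ACTIONS[vals.index(max(vals))]

def gpd_choice(s):
    # prefer the action whose outcome state is more strongly associated with the goal
    def goal_association(nxt):
        return assoc.get((nxt, GOAL), 0.0) + assoc.get((GOAL, nxt), 0.0)
    return max(ACTIONS, key=lambda a: goal_association(step(s, a)))

print("RL choice before any reward: ", rl_choice(test_state))
print("GPD choice before any reward:", gpd_choice(test_state))
```

In this toy run the RL agent stays indifferent until a reward arrives, while the GPD agent already prefers the goal-ward action; the actual GPD mechanism spreads goal proximity over multiple steps rather than the one-step association used here.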
