Petr Babkin, Graduate Student, RPI

Sage 4101

March 23, 2016 12:00 PM - 1:30 PM

Recreating human-level natural language understanding (NLU) in computer systems requires deep processing and a massive amount of knowledge. The currently prevalent knowledge-lean NLU methods make small incremental advances, but it is far from certain that they can ever attain human-level facility. Knowledge-based approaches, for their part, suffer from a lack of resources for massive knowledge acquisition. As a result, current knowledge-based analyzers are brittle: too many inputs require knowledge that the systems do not have. One way out of this conundrum is to concentrate on acquiring more knowledge. But while that work is ongoing, there are ways of optimizing NLU even when it relies on currently available knowledge bases. A methodological orientation toward modeling human performance (not only human results) helps to find ways of alleviating errors and uncertainty in NLU. Our hypothesis is based on two observations: a) the prevalence of interruptions and completions in dialog indicates that people often halt the processing of dialog turns before those turns are fully perceived, let alone fully processed; and b) if, at any point during analysis, a language-endowed intelligent agent (LEIA) decides that it knows what it needs to do in response to the input, further processing of that input may be abandoned. This hypothesis aligns well with the principles of bounded optimality and least effort, which are known to underlie much of human behavior. It can also be viewed as a model of human behavior in the face of scarce attentional resources. Work is ongoing in the LEIA Lab, under an ONR grant, on developing an explanatory theory of this selective processing and operationalizing it in a computational LEIA system. This research will form the basis of my dissertation work.
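The core idea of the hypothesis can be illustrated with a toy sketch: an agent consumes an utterance token by token and abandons further input processing as soon as its confidence in an actionable interpretation crosses a threshold. This is not the LEIA system's actual implementation; all names here (interpret_prefix, CONFIDENCE_THRESHOLD, the pattern-matching "analysis") are hypothetical stand-ins, assumed purely for illustration.

```python
# Hypothetical sketch of selective processing with early exit.
# Real semantic analysis is replaced by a trivial pattern matcher.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for "good enough to act on"


def interpret_prefix(tokens):
    """Toy stand-in for semantic analysis of a partial utterance.

    Returns (interpretation, confidence); confidence grows with the
    fraction of a known command pattern matched so far.
    """
    known_command = ["close", "the", "door"]
    matched = sum(1 for a, b in zip(tokens, known_command) if a == b)
    confidence = matched / len(known_command)
    return ("CLOSE(door)" if matched else None), confidence


def process_utterance(tokens):
    """Consume tokens one at a time; stop once confident enough to act."""
    interpretation = None
    for i in range(1, len(tokens) + 1):
        interpretation, confidence = interpret_prefix(tokens[:i])
        if confidence >= CONFIDENCE_THRESHOLD:
            # Early exit: the remaining tokens are never processed,
            # mirroring how a listener may stop attending once the
            # speaker's intent is clear.
            return interpretation, tokens[i:]
    return interpretation, []


action, unread = process_utterance(
    ["close", "the", "door", "please", "before", "you", "leave"]
)
print(action, "| tokens never processed:", unread)
```

In this toy run the agent commits to the interpretation after the third token and skips the remaining four, which is the least-effort behavior the abstract describes; the interesting research questions, of course, lie in how a real LEIA estimates that confidence.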
