Petr Babkin

Sage 4101

November 2, 2016 12:00 PM - 1:30 PM

Despite growing success in designing algorithms for individual NLP tasks, synthesizing them into end-to-end language understanding remains a daunting problem (Liddy & Hovy, 2006). Among the main reasons is the qualitative increase in complexity this synthesis entails: a) linguistic phenomena interact in ways that are hard to model and cannot be captured by trivially stacking disparate models (Jackendoff, 2007), and b) small local errors accumulate substantially at the end-to-end scale, dramatically lowering the effective accuracy of such systems (Caselli et al., 2015). A brute-force response is to build ever more complex and computationally demanding models that explicitly account for these interactions and dampen error propagation. A more cognitively motivated alternative is 1) to employ robust models that can reach an acceptable solution despite potentially incomplete and erroneous input, and 2) to organize processing as an incremental building of partial understanding that continues only until the result is deemed good enough for the current purpose (Nirenburg & McShane, 2015). This talk will focus on the technical aspects of the work to be presented in my upcoming candidacy defense talk.
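To make the "good enough" stopping criterion in point 2 concrete, here is a minimal Python sketch of an incremental processing loop. It is purely illustrative and not drawn from the talk: the analyzer, the tiny lexicon, and the token-coverage confidence estimate are all hypothetical stand-ins for real components.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PartialInterpretation:
        """A possibly incomplete analysis, built up one increment at a time."""
        recognized: List[str] = field(default_factory=list)
        confidence: float = 0.0

    # Toy stand-in for a real analyzer: the fraction of tokens seen so far
    # that it can "understand" against a tiny lexicon serves as a crude
    # confidence estimate.
    LEXICON = {"the", "cat", "sat", "on", "mat"}

    def interpret_increment(state: PartialInterpretation, token: str,
                            seen: int) -> PartialInterpretation:
        if token.lower() in LEXICON:
            state.recognized.append(token)
        state.confidence = len(state.recognized) / seen
        return state

    def understand(tokens: List[str], good_enough: float = 0.8,
                   min_seen: int = 3) -> PartialInterpretation:
        """Build partial understanding incrementally; stop as soon as the
        result is deemed good enough for the current purpose."""
        state = PartialInterpretation()
        for i, token in enumerate(tokens, start=1):
            state = interpret_increment(state, token, seen=i)
            if i >= min_seen and state.confidence >= good_enough:
                break  # acceptable partial solution; skip remaining input
        return state

    result = understand("The cat sat on the mat today".split())
    print(result.recognized, round(result.confidence, 2))

The point of this loop structure is that processing can halt after any increment, so the cost of a full analysis is paid only when a cheaper partial interpretation will not do; a real system would replace the coverage heuristic with a genuine semantic confidence measure.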

 
