CS Colloquium: Adam Lopez (Edinburgh)

What do neural networks learn about language?

Neural network models have redefined the state-of-the-art in many areas of natural language processing. Much of this success is attributed to their ability to learn representations of their input, and this has invited bold claims that these representations encode important semantic, syntactic, and morphological properties of language. For example, when one research group recently suggested that "prior information regarding morphology ... among others, should be incorporated" into neural models, a prominent deep learning group retorted that it is "unnecessary to consider these prior information" when using neural networks. In this talk I’ll try to tease apart the hype from the reality, focusing on two questions: what do character-level neural models really learn about morphology? And what do LSTMs learn about negation?

This is work with Clara Vania, Federico Fancellu, Yova Kementchedjhieva, Andreas Grivas, and Bonnie Webber.

Bio: Adam Lopez is a Reader in the School of Informatics at the University of Edinburgh ("Reader" is a peculiar British title meaning "Associate Professor"). His research group develops computational models of natural language learning, understanding, and generation in people and machines, focusing on basic scientific, mathematical, and engineering problems related to these models. He's especially interested in models that handle diverse linguistic phenomena across languages.

Monday, June 18 at 11:00am

St. Mary's Hall, 326
3700 Reservoir Road, N.W., Washington


Georgetown College, Computer Science



Event Contact: Jesse Bailey

