Linguistics Speaker Series: Mark Steedman

Mark Steedman, University of Edinburgh

Bootstrapping Language Acquisition

Recent work with Abend, Kwiatkowski, Smith, and Goldwater (2016) has shown that a general-purpose program for inducing parsers incrementally from sequences of paired strings (in any language) and meanings (in any convenient language of logical form) can be applied to real English child-directed utterances from the CHILDES corpus to successfully learn the child's ("Eve's") grammar, combining lexical and syntactic learning in a single pass through the data.

While the earliest stages of learning necessarily proceed by pure "semantic bootstrapping", building a probabilistic model of all possible pairings of all possible words and derivations with all possible decompositions of logical form, the later stages of learning show emergent effects of "syntactic bootstrapping" (Gleitman 1990), where the program's increasing knowledge of the grammar of the language allows it to identify the syntactic type and meaning of unseen words in one trial, as has been shown to be characteristic of real children in experiments with nonce-word learning. The concluding section of the talk considers the extension of the learner to a more realistic semantics including information structure and conversational dynamics.
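
For readers who want a concrete feel for the semantic-bootstrapping step described above, the following is a minimal toy sketch in Python. It is not the Abend et al. (2016) learner: it ignores syntax and derivations entirely, simply pairing every word of an utterance with every fragment of that utterance's logical form and letting pairings that recur across utterances accumulate probability. All data, names, and the nested-tuple meaning representation are invented for illustration.

    # Toy sketch of semantic bootstrapping (illustrative only, not the
    # Abend et al. 2016 system): with no grammar yet, every word is
    # paired with every fragment of the utterance's logical form, and
    # recurring pairings gradually dominate the probabilistic lexicon.
    from collections import defaultdict
    from itertools import product

    def fragments(logical_form):
        """Enumerate sub-expressions of a nested-tuple logical form,
        e.g. ('eat', 'eve', 'cookie') yields itself and its parts."""
        yield logical_form
        if isinstance(logical_form, tuple):
            for part in logical_form:
                yield from fragments(part)

    class ToyLexicon:
        def __init__(self):
            # counts[word][meaning_fragment] -> accumulated weight
            self.counts = defaultdict(lambda: defaultdict(float))

        def observe(self, words, logical_form):
            # Spread probability mass over all word/fragment pairings.
            frags = list(fragments(logical_form))
            weight = 1.0 / len(frags)
            for word, frag in product(words, frags):
                self.counts[word][frag] += weight

        def best_meaning(self, word):
            # After enough utterances, recurring pairings dominate.
            candidates = self.counts[word]
            return max(candidates, key=candidates.get) if candidates else None

    if __name__ == "__main__":
        lex = ToyLexicon()
        # Invented child-directed utterances paired with logical forms.
        data = [
            ("eve eats a cookie".split(), ('eat', 'eve', 'cookie')),
            ("eve wants a cookie".split(), ('want', 'eve', 'cookie')),
            ("eve eats an apple".split(), ('eat', 'eve', 'apple')),
            ("adam eats a cookie".split(), ('eat', 'adam', 'cookie')),
        ]
        for words, lf in data:
            lex.observe(words, lf)
        print(lex.best_meaning("cookie"))  # 'cookie'
        print(lex.best_meaning("eats"))    # 'eat'

Even with a handful of invented utterance-meaning pairs, the counts already favor 'cookie' as the meaning of "cookie" and 'eat' as the meaning of "eats", which gives a rough intuition for how recurring pairings let a learner converge on word-level alignments before any syntax is known.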

Bio: Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh. Previously, he taught as Professor in the Department of Computer and Information Science at the University of Pennsylvania, which he joined as Associate Professor in 1988. His PhD in Artificial Intelligence is from the University of Edinburgh. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the British Academy, the Royal Society of Edinburgh, the Association for Computational Linguistics, and the Cognitive Science Society, and a Member of the European Academy. His research interests cover issues in computational linguistics, artificial intelligence, computer science, and cognitive science, including the syntax and semantics of natural language, wide-coverage parsing and open-domain question-answering, comprehension of natural language discourse by humans and by machine, grammar-based language modeling, natural language generation, and the semantics of intonation in spoken discourse. Much of his current NLP research focuses on probabilistic parsing and robust semantics for question-answering using the CCG grammar formalism, including the acquisition of language from paired sentences and meanings by child and machine. He sometimes works with colleagues in computer animation, using these theories to guide the graphical animation of speaking virtual or simulated autonomous human agents, work for which he recently shared the 2017 IFAAMAS Influential Paper Award for a 1994 paper with Justine Cassell and others. Some of his research concerns the analysis of music by humans and machines.

Friday, February 23 at 3:30pm

Poulton Hall, Room 230
1421 37th St. NW, Washington, DC

Departments: Georgetown College, Linguistics

Event Contact: Conor Sinclair
