Tal Linzen: Virtual Colloquium
Title: What inductive biases enable human-like syntactic generalization?
Abstract: Humans apply their knowledge of syntax in a systematic way to constructions that are rare or absent in their linguistic input. This observation, traditionally discussed under the banner of the poverty of the stimulus, has motivated the assumption that humans are innately endowed with inductive biases that make crucial reference to syntactic structure. The recent applied success of deep learning systems that are not designed on the basis of such biases may appear to call this assumption into question; in practice, however, such engineering success speaks to this question only indirectly at best, as engineering benchmarks do not test whether a system in fact generalizes as humans do. In this talk, I will use established psycholinguistic paradigms to examine the syntactic generalization capabilities of contemporary neural network architectures. Focusing on the classic cases of subject-verb agreement and auxiliary fronting in English question formation, I will demonstrate how neural networks with and without explicit syntactic structure can be used to test the necessity and sufficiency of structural inductive biases, and I will present experiments indicating that human-like generalization requires stronger inductive biases than those expressed in standard neural network architectures.