Linking Linguistic Theory and Brain Dynamics with Deep Neural Models
Yair Lakretz (EHESS, Paris)
Seminar Room 4, Ada Lovelace Building
Hosted by the Neural Computation Hub
Abstract: Humans have an innate ability to process language. This unique ability, linguists argue, results from a specific brain function: the recursive building of hierarchical structures. Specifically, a dedicated set of brain regions, known as the Language Network, is thought to iteratively link the successive words of a sentence to build its latent syntactic structure. However, a major obstacle limits the discovery of the neural basis of recursion and nested-tree structures: linguistic models are based on discrete symbolic representations and are thus difficult to compare with the vectorial representations of neuronal activity. Recent advances in Artificial Intelligence (AI) can now help address this gap. In AI, deep-learning architectures trained on large text corpora demonstrate near-human abilities on a variety of language tasks. These language models are, like the human brain, based on vectorial representations, and thus provide new opportunities to understand the complex neural computations underlying natural language processing and, in particular, recursive processing. In this talk, I will review our work from the past few years on recursive processing in neural language models, and then present a new methodological approach to bridging the gap between symbolic theories and vectorial representations.
Contact information
Enquiries to Conor Houghton