Davide Turco

General Profile:

I have a First-Class degree in Theoretical Physics from the University of Glasgow. My dissertation focused on a computational approach to quantum chromodynamics, known as lattice QCD. My background is in scientific computing, in particular numerical methods and Monte Carlo simulations. I have also completed an internship at the Max Planck Institute for Plasma Physics, where I collaborated on a code used for simulating the propagation of waves in turbulent plasmas. During this internship, I came across machine learning techniques for processing large amounts of data. I decided to join the Interactive AI CDT because I want to explore different AI-related research areas before embarking on my PhD project.

Research Project Summary:

Neural language models (LMs), such as LSTMs and Transformers, underpin several of the technologies that we use every day. While neural LMs are able to perform impressively on many linguistic tasks, there is no universally accepted theory of how these models process language. In particular, it is still unclear whether LMs understand and use explicit linguistic rules or whether they simply perform pattern matching and imitate the language they are trained on. By contrast, the only other known language-processing system, the human brain, appears to be more adept at learning language rules and applying them to new tasks.

In this PhD project, we will investigate to what extent artificial LMs mimic the brain’s response to language. Electroencephalography (EEG) data recorded from human subjects are used as a quantitative framework to test whether signatures of language processing in LMs explain features of the brain’s response. Previous EEG studies have shown that recorded cortical activity tracks syntactic structures and semantic information in the stimulus.

Our preliminary work focused on the development of a proof-of-concept tool for aligning EEG and LSTM responses to the same auditory stimulus. Experiments conducted with this tool revealed a strong task dependence and highlighted the difficulty of interpreting correlations between the responses of the two language-processing systems. Improving and expanding this work, we aim to study the emergence of syntactic rules in LMs at the level of individual model units. Similarly, we will investigate whether the response to semantic information is similar to that of the human brain. Another aspect of language that could be explored is compositionality, i.e., the ability to generate higher-order concepts from simpler components.
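To make the alignment idea concrete, the sketch below shows one simple way such a comparison can be set up: correlating the per-word activation series of each model unit with a per-word EEG feature. This is a toy illustration with synthetic data, not the project's actual tool; in real experiments the activations would come from a trained LSTM and the signal from recorded EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, n_units = 200, 16  # words in the stimulus, hidden units in the model
hidden = rng.standard_normal((n_words, n_units))  # stand-in for LSTM states

# Synthetic EEG feature: partly driven by unit 3, plus noise, so that
# the correlation analysis has a known signal to recover.
eeg = 0.6 * hidden[:, 3] + rng.standard_normal(n_words)

def unit_correlations(states, signal):
    """Pearson correlation between each unit's series and the EEG feature."""
    s = (states - states.mean(axis=0)) / states.std(axis=0)
    g = (signal - signal.mean()) / signal.std()
    return s.T @ g / len(g)

r = unit_correlations(hidden, eeg)
best = int(np.argmax(np.abs(r)))  # unit whose activity best tracks the EEG
```

In this synthetic setup the analysis recovers unit 3 as the most EEG-correlated unit; the task dependence mentioned above arises because, with real data, such correlations vary with the stimulus and the linguistic feature being probed.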

Bridging the gap between machines and the human brain in their processing of language is important for both NLP and neuroscience, because it will contribute to the understanding of the mechanisms used by the two systems. Moreover, studying the individual components of an LM and their involvement in processing language structures could inform the development of new neural architectures, making them more human-like and thus improving the interaction between intelligent machines and humans.
