Hosted by the Generalisation in Mind & Machine research group
Join via Zoom: https://bristol-ac-uk.zoom.us/j/99233643556?pwd=a2VmSVlQR2l0aDV2WHlNcFY5dXR0UT09
Meeting ID: 992 3364 3556 | Passcode: 168927
Abstract: Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as tools but also interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they make novel predictions about neural phenomena or shed light on the fundamental functions being optimized. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We begin by reviewing the principles of grid cell mechanism and function obtained from first-principles modeling efforts, then rigorously examine the claims of deep learning models of grid cells. Using large-scale hyperparameter sweeps and theory-driven experimentation, we demonstrate that the results of such models may be more strongly driven by particular, non-fundamental, and post-hoc implementation choices than by fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience. In conclusion, caution and consideration, together with biological knowledge, are warranted in building and interpreting deep learning models in Neuroscience.
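To give a rough sense of what a hyperparameter sweep over implementation choices looks like in practice, the sketch below is an illustration only, not the authors' code: the hyperparameter names (readout nonlinearity, place-cell tuning width) and the helper train_and_score_grid_model are hypothetical stand-ins for training a path-integrating network and scoring how grid-like its learned units are.

```python
# Illustrative sketch only -- NOT the authors' code. It shows the general
# shape of a sweep over implementation choices that might affect whether
# grid-like units emerge in a trained path-integration model.
import itertools
import random

def train_and_score_grid_model(readout, tuning_width_cm, seed):
    """Hypothetical stand-in: train a network with these implementation
    choices and return a gridness-style score. Here it only returns a
    placeholder value so the sweep scaffolding runs end to end."""
    random.seed(hash((readout, tuning_width_cm, seed)))
    return random.random()  # placeholder, not a real gridness score

# Example implementation choices to sweep (assumed values, for illustration).
readouts = ["softmax", "linear", "relu"]
tuning_widths_cm = [1.0, 5.0, 12.0]
seeds = range(3)

results = []
for readout, width, seed in itertools.product(readouts, tuning_widths_cm, seeds):
    score = train_and_score_grid_model(readout, width, seed)
    results.append({"readout": readout, "tuning_width_cm": width,
                    "seed": seed, "gridness": score})

# Summarize how strongly the outcome tracks one implementation choice.
for readout in readouts:
    scores = [r["gridness"] for r in results if r["readout"] == readout]
    print(f"readout={readout:8s} mean gridness ~ {sum(scores) / len(scores):.2f}")
```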
This work will appear at NeurIPS 2022.
Full details can be found on the Mind and Machine website: https://mindandmachine.blogs.bristol.ac.uk/seminars/